Test Report: Docker_macOS 14555

f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd:2022-07-28:25060

Failed tests (24/289)

TestDownloadOnly/v1.16.0/preload-exists (0.11s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:107: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.11s)
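
Note: the check at aaa_download_only_test.go:107 boils down to an os.Stat of the expected tarball path under $MINIKUBE_HOME/cache/preloaded-tarball. A minimal sketch of that check, with an illustrative helper for the path construction (not minikube's actual API):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadTarballPath rebuilds the cache location seen in the failure above;
// the naming scheme is copied from the logged path, the helper itself is
// hypothetical.
func preloadTarballPath(minikubeHome, k8sVersion, containerRuntime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4",
		k8sVersion, containerRuntime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadTarballPath(os.Getenv("MINIKUBE_HOME"), "v1.16.0", "docker")
	if _, err := os.Stat(p); err != nil {
		// Same failure mode as the test: the stat error is reported verbatim.
		fmt.Printf("failed to verify preloaded tarball file exists: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("preload tarball present:", p)
}
```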

TestFunctional/parallel/DashboardCmd (304.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220728144449-12923 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:910: output didn't produce a URL
functional_test.go:902: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220728144449-12923 --alsologtostderr -v=1] ...
functional_test.go:902: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220728144449-12923 --alsologtostderr -v=1] stdout:
functional_test.go:902: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220728144449-12923 --alsologtostderr -v=1] stderr:
I0728 14:48:40.415162   16817 out.go:296] Setting OutFile to fd 1 ...
I0728 14:48:40.415621   16817 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 14:48:40.415628   16817 out.go:309] Setting ErrFile to fd 2...
I0728 14:48:40.415633   16817 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0728 14:48:40.415761   16817 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
I0728 14:48:40.416070   16817 mustload.go:65] Loading cluster: functional-20220728144449-12923
I0728 14:48:40.416391   16817 config.go:178] Loaded profile config "functional-20220728144449-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0728 14:48:40.416773   16817 cli_runner.go:164] Run: docker container inspect functional-20220728144449-12923 --format={{.State.Status}}
I0728 14:48:40.603019   16817 host.go:66] Checking if "functional-20220728144449-12923" exists ...
I0728 14:48:40.603531   16817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20220728144449-12923
I0728 14:48:40.671264   16817 api_server.go:165] Checking apiserver status ...
I0728 14:48:40.671355   16817 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0728 14:48:40.671425   16817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220728144449-12923
I0728 14:48:40.905939   16817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55384 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/functional-20220728144449-12923/id_rsa Username:docker}
I0728 14:48:40.994451   16817 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9691/cgroup
W0728 14:48:41.002324   16817 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/9691/cgroup: Process exited with status 1
stdout:

stderr:
I0728 14:48:41.002342   16817 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:55383/healthz ...
I0728 14:48:41.009092   16817 api_server.go:266] https://127.0.0.1:55383/healthz returned 200:
ok
W0728 14:48:41.009120   16817 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0728 14:48:41.009250   16817 config.go:178] Loaded profile config "functional-20220728144449-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0728 14:48:41.009262   16817 addons.go:65] Setting dashboard=true in profile "functional-20220728144449-12923"
I0728 14:48:41.009271   16817 addons.go:153] Setting addon dashboard=true in "functional-20220728144449-12923"
I0728 14:48:41.009289   16817 host.go:66] Checking if "functional-20220728144449-12923" exists ...
I0728 14:48:41.009650   16817 cli_runner.go:164] Run: docker container inspect functional-20220728144449-12923 --format={{.State.Status}}
I0728 14:48:41.112441   16817 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
I0728 14:48:41.154284   16817 out.go:177]   - Using image kubernetesui/metrics-scraper:v1.0.8
I0728 14:48:41.175323   16817 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0728 14:48:41.175338   16817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0728 14:48:41.175400   16817 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220728144449-12923
I0728 14:48:41.249072   16817 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55384 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/functional-20220728144449-12923/id_rsa Username:docker}
I0728 14:48:41.340901   16817 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0728 14:48:41.340914   16817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0728 14:48:41.354746   16817 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0728 14:48:41.354764   16817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0728 14:48:41.368178   16817 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0728 14:48:41.368193   16817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0728 14:48:41.382102   16817 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0728 14:48:41.382113   16817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4278 bytes)
I0728 14:48:41.395129   16817 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
I0728 14:48:41.395150   16817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0728 14:48:41.408002   16817 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0728 14:48:41.408027   16817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0728 14:48:41.420985   16817 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0728 14:48:41.420995   16817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0728 14:48:41.434171   16817 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0728 14:48:41.434182   16817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0728 14:48:41.446898   16817 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0728 14:48:41.446912   16817 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0728 14:48:41.460458   16817 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0728 14:48:41.794703   16817 addons.go:116] Writing out "functional-20220728144449-12923" config to set dashboard=true...
W0728 14:48:41.795037   16817 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0728 14:48:41.795691   16817 kapi.go:59] client config for functional-20220728144449-12923: &rest.Config{Host:"https://127.0.0.1:55383", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0728 14:48:41.804618   16817 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  2a9a47ac-811d-4d51-a7ce-d323a02a4130 843 0 2022-07-28 14:48:41 -0700 PDT <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] []  [{kubectl-client-side-apply Update v1 2022-07-28 14:48:41 -0700 PDT FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.104.186.157,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.104.186.157],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0728 14:48:41.804731   16817 out.go:239] * Launching proxy ...
* Launching proxy ...
I0728 14:48:41.804810   16817 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-20220728144449-12923 proxy --port 36195]
I0728 14:48:41.806641   16817 dashboard.go:157] Waiting for kubectl to output host:port ...
I0728 14:48:41.836443   16817 dashboard.go:175] proxy stdout: S arting to serve on 127. .0.1:36195
W0728 14:48:41.836492   16817 out.go:239] * Verifying proxy health ...
* Verifying proxy health ...
I0728 14:48:41.836523   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.836573   16817 retry.go:31] will retry after 110.466µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.836754   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.836772   16817 retry.go:31] will retry after 216.077µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.837030   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.837045   16817 retry.go:31] will retry after 262.026µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.837404   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.837419   16817 retry.go:31] will retry after 316.478µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.837852   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.837864   16817 retry.go:31] will retry after 468.098µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.838482   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.838495   16817 retry.go:31] will retry after 901.244µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.839659   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.839675   16817 retry.go:31] will retry after 644.295µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.840513   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.840522   16817 retry.go:31] will retry after 1.121724ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.841969   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.841982   16817 retry.go:31] will retry after 1.529966ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.843991   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.844050   16817 retry.go:31] will retry after 3.078972ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.847991   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.848077   16817 retry.go:31] will retry after 5.854223ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.854142   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.854191   16817 retry.go:31] will retry after 11.362655ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.865950   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.865991   16817 retry.go:31] will retry after 9.267303ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.875327   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.875346   16817 retry.go:31] will retry after 17.139291ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.892652   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.892711   16817 retry.go:31] will retry after 23.881489ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.916661   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.916705   16817 retry.go:31] will retry after 42.427055ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:41.959294   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:41.959328   16817 retry.go:31] will retry after 51.432832ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:42.010817   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:42.010858   16817 retry.go:31] will retry after 78.14118ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:42.089141   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:42.089175   16817 retry.go:31] will retry after 174.255803ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:42.263530   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:42.263590   16817 retry.go:31] will retry after 159.291408ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:42.423082   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:42.423115   16817 retry.go:31] will retry after 233.827468ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:42.658215   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:42.658267   16817 retry.go:31] will retry after 429.392365ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:43.089649   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:43.089686   16817 retry.go:31] will retry after 801.058534ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:43.891323   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:43.891371   16817 retry.go:31] will retry after 1.529087469s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:45.420562   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:45.437008   16817 retry.go:31] will retry after 1.335136154s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:46.772233   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:46.772289   16817 retry.go:31] will retry after 2.012724691s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:48.785191   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:48.785250   16817 retry.go:31] will retry after 4.744335389s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:53.529931   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:53.530004   16817 retry.go:31] will retry after 4.014454686s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:48:57.546307   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:48:57.546367   16817 retry.go:31] will retry after 11.635741654s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:49:09.182644   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:49:09.182786   16817 retry.go:31] will retry after 15.298130033s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:49:24.483019   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:49:24.483175   16817 retry.go:31] will retry after 19.631844237s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:49:44.114946   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:49:44.115019   16817 retry.go:31] will retry after 15.195386994s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:49:59.310990   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:49:59.311120   16817 retry.go:31] will retry after 28.402880652s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:50:27.715925   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:50:27.716051   16817 retry.go:31] will retry after 1m6.435206373s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:51:34.167206   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:51:34.167261   16817 retry.go:31] will retry after 1m28.514497132s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:53:02.683435   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:53:02.683496   16817 retry.go:31] will retry after 34.767217402s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0728 14:53:37.451397   16817 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0728 14:53:37.451446   16817 retry.go:31] will retry after 1m5.688515861s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
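
Note: every retry above fails with the same "http: no Host in request URL" error, i.e. the dashboard URL was built with an empty host:port. That is consistent with the mangled proxy stdout captured at dashboard.go:175 ("S arting to serve on 127. .0.1:36195"): the expected "Starting to serve on HOST:PORT" line never matches, so nothing is extracted. A minimal sketch of that parse step, assuming a regexp that only approximates whatever dashboard.go actually uses:

```go
package main

import (
	"fmt"
	"regexp"
)

// hostPortRe approximates the pattern used to pull HOST:PORT out of
// `kubectl proxy` stdout; the real expression in dashboard.go may differ.
var hostPortRe = regexp.MustCompile(`Starting to serve on (\d+\.\d+\.\d+\.\d+:\d+)`)

func parseProxyHostPort(line string) (string, error) {
	m := hostPortRe.FindStringSubmatch(line)
	if m == nil {
		return "", fmt.Errorf("no host:port found in %q", line)
	}
	return m[1], nil
}

func main() {
	for _, line := range []string{
		"Starting to serve on 127.0.0.1:36195", // what kubectl proxy normally prints
		"S arting to serve on 127. .0.1:36195", // the garbled line captured above
	} {
		hp, err := parseProxyHostPort(line)
		fmt.Printf("line=%q -> host:port=%q err=%v\n", line, hp, err)
	}
}
```

Only the intact line yields a host:port; with the garbled one nothing is extracted, so the URL handed to the health checker has no host, matching the retries above.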
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-20220728144449-12923
helpers_test.go:235: (dbg) docker inspect functional-20220728144449-12923:

-- stdout --
	[
	    {
	        "Id": "b6d31d04aa22026774e45d3141fb7d68838a1d9aa0eea62552f843da5d307e00",
	        "Created": "2022-07-28T21:44:55.565076924Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 20254,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T21:44:55.84658646Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/b6d31d04aa22026774e45d3141fb7d68838a1d9aa0eea62552f843da5d307e00/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b6d31d04aa22026774e45d3141fb7d68838a1d9aa0eea62552f843da5d307e00/hostname",
	        "HostsPath": "/var/lib/docker/containers/b6d31d04aa22026774e45d3141fb7d68838a1d9aa0eea62552f843da5d307e00/hosts",
	        "LogPath": "/var/lib/docker/containers/b6d31d04aa22026774e45d3141fb7d68838a1d9aa0eea62552f843da5d307e00/b6d31d04aa22026774e45d3141fb7d68838a1d9aa0eea62552f843da5d307e00-json.log",
	        "Name": "/functional-20220728144449-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220728144449-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220728144449-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/af3825359fb72c84d43dcb039aeeedf49b33f92511439a894f9c9de3fb4f30e9-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/docker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d059732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af3825359fb72c84d43dcb039aeeedf49b33f92511439a894f9c9de3fb4f30e9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af3825359fb72c84d43dcb039aeeedf49b33f92511439a894f9c9de3fb4f30e9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af3825359fb72c84d43dcb039aeeedf49b33f92511439a894f9c9de3fb4f30e9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220728144449-12923",
	                "Source": "/var/lib/docker/volumes/functional-20220728144449-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220728144449-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220728144449-12923",
	                "name.minikube.sigs.k8s.io": "functional-20220728144449-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ba36164b0e36f034c75d65c6ea043d5f97edc4871939afaa4b09eefa472fb4ff",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55384"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55380"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55381"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55382"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55383"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ba36164b0e36",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220728144449-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b6d31d04aa22",
	                        "functional-20220728144449-12923"
	                    ],
	                    "NetworkID": "eb43d765d0d53be69fe3a4d16417fe190a58e6b89072b2f8666fb248959564f2",
	                    "EndpointID": "7c99d6602bc73e52bc1a0b423e04406983a30a77b82b026cd2aba8fbb8a00612",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
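
Note: the Ports map in the inspect output is the same data the cli_runner template queries earlier in the log (`{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`, which here resolves to 55383, the apiserver healthz port). A minimal Go sketch decoding that field directly, with a pared-down struct that is an assumption rather than the full Docker API type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type portBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

// inspectEntry keeps only the field this post-mortem cares about.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"functional-20220728144449-12923").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	if len(entries) == 0 {
		panic("no such container")
	}
	// In the dump above, 8441/tcp (the apiserver port) maps to host port 55383.
	for _, b := range entries[0].NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("apiserver (8441/tcp) published at %s:%s\n", b.HostIP, b.HostPort)
	}
}
```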
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20220728144449-12923 -n functional-20220728144449-12923
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 logs -n 25: (3.17875s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------------------------------------------------------------------|---------------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                                         Args                                                         |             Profile             |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------------------------------------------------|---------------------------------|---------|---------|---------------------|---------------------|
	| service        | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | service list                                                                                                         |                                 |         |         |                     |                     |
	| service        | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | service --namespace=default                                                                                          |                                 |         |         |                     |                     |
	|                | --https --url hello-node                                                                                             |                                 |         |         |                     |                     |
	| service        | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | service hello-node --url                                                                                             |                                 |         |         |                     |                     |
	|                | --format={{.IP}}                                                                                                     |                                 |         |         |                     |                     |
	| service        | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | service hello-node --url                                                                                             |                                 |         |         |                     |                     |
	| ssh            | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | ssh stat                                                                                                             |                                 |         |         |                     |                     |
	|                | /mount-9p/created-by-test                                                                                            |                                 |         |         |                     |                     |
	| ssh            | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | ssh stat                                                                                                             |                                 |         |         |                     |                     |
	|                | /mount-9p/created-by-pod                                                                                             |                                 |         |         |                     |                     |
	| ssh            | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | ssh sudo umount -f /mount-9p                                                                                         |                                 |         |         |                     |                     |
	| mount          | -p functional-20220728144449-12923                                                                                   | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT |                     |
	|                | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2053938696/001:/mount-9p |                                 |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                                                                  |                                 |         |         |                     |                     |
	| ssh            | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT |                     |
	|                | ssh findmnt -T /mount-9p | grep                                                                                      |                                 |         |         |                     |                     |
	|                | 9p                                                                                                                   |                                 |         |         |                     |                     |
	| ssh            | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | ssh findmnt -T /mount-9p | grep                                                                                      |                                 |         |         |                     |                     |
	|                | 9p                                                                                                                   |                                 |         |         |                     |                     |
	| ssh            | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | ssh -- ls -la /mount-9p                                                                                              |                                 |         |         |                     |                     |
	| ssh            | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT |                     |
	|                | ssh sudo umount -f /mount-9p                                                                                         |                                 |         |         |                     |                     |
	| start          | -p                                                                                                                   | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT |                     |
	|                | functional-20220728144449-12923                                                                                      |                                 |         |         |                     |                     |
	|                | --dry-run --memory                                                                                                   |                                 |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                              |                                 |         |         |                     |                     |
	|                | --driver=docker                                                                                                      |                                 |         |         |                     |                     |
	| dashboard      | --url --port 36195 -p                                                                                                | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT |                     |
	|                | functional-20220728144449-12923                                                                                      |                                 |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                               |                                 |         |         |                     |                     |
	| start          | -p                                                                                                                   | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT |                     |
	|                | functional-20220728144449-12923                                                                                      |                                 |         |         |                     |                     |
	|                | --dry-run --alsologtostderr                                                                                          |                                 |         |         |                     |                     |
	|                | -v=1 --driver=docker                                                                                                 |                                 |         |         |                     |                     |
	| update-context | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | update-context                                                                                                       |                                 |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                               |                                 |         |         |                     |                     |
	| update-context | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | update-context                                                                                                       |                                 |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                               |                                 |         |         |                     |                     |
	| update-context | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | update-context                                                                                                       |                                 |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                               |                                 |         |         |                     |                     |
	| image          | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | image ls --format short                                                                                              |                                 |         |         |                     |                     |
	| image          | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | image ls --format yaml                                                                                               |                                 |         |         |                     |                     |
	| ssh            | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT |                     |
	|                | ssh pgrep buildkitd                                                                                                  |                                 |         |         |                     |                     |
	| image          | functional-20220728144449-12923 image build -t                                                                       | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | localhost/my-image:functional-20220728144449-12923                                                                   |                                 |         |         |                     |                     |
	|                | testdata/build                                                                                                       |                                 |         |         |                     |                     |
	| image          | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | image ls                                                                                                             |                                 |         |         |                     |                     |
	| image          | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | image ls --format json                                                                                               |                                 |         |         |                     |                     |
	| image          | functional-20220728144449-12923                                                                                      | functional-20220728144449-12923 | jenkins | v1.26.0 | 28 Jul 22 14:48 PDT | 28 Jul 22 14:48 PDT |
	|                | image ls --format table                                                                                              |                                 |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------------------------------------------------|---------------------------------|---------|---------|---------------------|---------------------|
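	
	Note: the dashboard row above records the exact invocation under test; as a sketch, it can be replayed by hand with the binary, port, and profile name exactly as logged in the table:
	
	  out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220728144449-12923 --alsologtostderr -v=1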
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 14:48:40
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 14:48:40.662026   16829 out.go:296] Setting OutFile to fd 1 ...
	I0728 14:48:40.662777   16829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:48:40.662785   16829 out.go:309] Setting ErrFile to fd 2...
	I0728 14:48:40.662789   16829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:48:40.663034   16829 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 14:48:40.663799   16829 out.go:303] Setting JSON to false
	I0728 14:48:40.680677   16829 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5962,"bootTime":1659038958,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 14:48:40.680836   16829 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 14:48:40.702926   16829 out.go:177] * [functional-20220728144449-12923] minikube v1.26.0 on Darwin 12.5
	I0728 14:48:40.723647   16829 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 14:48:40.766656   16829 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 14:48:40.808590   16829 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 14:48:40.850545   16829 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 14:48:40.893363   16829 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 14:48:40.914973   16829 config.go:178] Loaded profile config "functional-20220728144449-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 14:48:40.915339   16829 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 14:48:40.983637   16829 docker.go:137] docker version: linux-20.10.17
	I0728 14:48:40.983842   16829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 14:48:41.156574   16829 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-28 21:48:41.058566222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 14:48:41.217197   16829 out.go:177] * Using the docker driver based on existing profile
	I0728 14:48:41.238306   16829 start.go:284] selected driver: docker
	I0728 14:48:41.238332   16829 start.go:808] validating driver "docker" against &{Name:functional-20220728144449-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220728144449-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 14:48:41.238515   16829 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 14:48:41.238686   16829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 14:48:41.374270   16829 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-28 21:48:41.313947515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 14:48:41.376463   16829 cni.go:95] Creating CNI manager for ""
	I0728 14:48:41.376485   16829 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 14:48:41.376498   16829 start_flags.go:310] config:
	{Name:functional-20220728144449-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220728144449-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 14:48:41.418675   16829 out.go:177] * dry-run validation complete!
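	
	Note: the dry-run recorded in the command table above (whose log this is) can be replayed the same way; a minimal sketch using the flags as logged:
	
	  out/minikube-darwin-amd64 start -p functional-20220728144449-12923 --dry-run --alsologtostderr -v=1 --driver=docker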
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 21:44:55 UTC, end at Thu 2022-07-28 21:53:41 UTC. --
	Jul 28 21:46:48 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:46:48.890868484Z" level=info msg="Loading containers: done."
	Jul 28 21:46:48 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:46:48.899835549Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 28 21:46:48 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:46:48.899910571Z" level=info msg="Daemon has completed initialization"
	Jul 28 21:46:48 functional-20220728144449-12923 systemd[1]: Started Docker Application Container Engine.
	Jul 28 21:46:48 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:46:48.923817796Z" level=info msg="API listen on [::]:2376"
	Jul 28 21:46:48 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:46:48.927063371Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 28 21:46:50 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:46:50.437286119Z" level=info msg="ignoring event" container=72c9f6ee89970ba5a9adb4028b7fba2c60edf129f99898b0e88102eece5cc499 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:46:50 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:46:50.437381315Z" level=info msg="ignoring event" container=aa13ef4f57dcdebaded87d9273511a1638eafaa2256b8f0c72f75f013f1ceb95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:46:50 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:46:50.438358885Z" level=info msg="ignoring event" container=36a1d4d7f9f260c9a215ec05fdbfda934c1dc1c2f0d8e2a02c1fd2499f721da9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:46:50 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:46:50.438396735Z" level=info msg="ignoring event" container=6165fc5bac0b046b4dfb1817eaaab01853dfc66f02687411e8578fbd53150930 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:46:50 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:46:50.449043350Z" level=info msg="ignoring event" container=0d36ba941a9995111591305b838e8a47b19c7d2cd3d6a299b3e25870fdc9fdc8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:46:50 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:46:50.455486741Z" level=info msg="ignoring event" container=f15065198e483e23b7488c50ce49a841544034b48dab2247e600a6e05c3123c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:46:50 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:46:50.455845977Z" level=info msg="ignoring event" container=554dcb83a4c44f21ea0182b0b73220b660d519e161188554150f751c35f33ec1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:47:00 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:47:00.375817402Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=88390e98e47f1473933bccecf3f5a570f3e0a1f8e61a131ff4471202c603ff47
	Jul 28 21:47:00 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:47:00.401899392Z" level=info msg="ignoring event" container=88390e98e47f1473933bccecf3f5a570f3e0a1f8e61a131ff4471202c603ff47 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:47:05 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:47:05.052544867Z" level=info msg="ignoring event" container=4e97b02d55889154f24f9fdb48e7be7c142ce8192ea78f3f6bf612d99da7e8f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:47:06 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:47:06.000260485Z" level=info msg="ignoring event" container=ed6973b8a6d6ece9bc3a3daa55de70e7bbc14e74350113ee1d9f8b839e362154 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:48:19 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:48:19.615823157Z" level=info msg="ignoring event" container=1ba32b778e0522a2c3d9dbf35296e219cddc16bd32acf9ed3a7bd3d81297abf3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:48:19 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:48:19.670505716Z" level=info msg="ignoring event" container=0d3228418296b5091f5c23b44abf8790a8975445ffad5f650c35846cb0456d37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:48:33 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:48:33.725401140Z" level=info msg="ignoring event" container=4b46ccf38ee81c06ff73a2641712b15e3f4cf412c3c40a01eb1d84e21b665a2c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:48:35 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:48:35.258495594Z" level=info msg="ignoring event" container=882ff8991f8ea58d2e560cfa0c6c2c27101faf621f21cd748ba9755e0a4b6dd6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:48:42 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:48:42.930742397Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 28 21:48:47 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:48:47.117252616Z" level=info msg="ignoring event" container=92320a677e3de6a456821a7b3b72ea0b71df7059d5201c05ff85aee300ebc589 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 21:48:47 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:48:47.322830766Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	Jul 28 21:48:49 functional-20220728144449-12923 dockerd[7429]: time="2022-07-28T21:48:49.616003402Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
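	
	Note: this section is the docker unit journal collected from inside the node; as a hedged sketch (assuming the profile is still running), roughly the same output can be pulled manually with:
	
	  out/minikube-darwin-amd64 ssh -p functional-20220728144449-12923 -- sudo journalctl -u docker --no-pager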
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                  CREATED             STATE               NAME                        ATTEMPT             POD ID
	68d1558937aef       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   4 minutes ago       Running             dashboard-metrics-scraper   0                   3397071b2afee
	bf9d6f3d8f628       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3         4 minutes ago       Running             kubernetes-dashboard        0                   96fcd01bf34ac
	4b46ccf38ee81       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    5 minutes ago       Exited              mount-munger                0                   882ff8991f8ea
	5456ee4215b51       82e4c8a736a4f                                                                                          5 minutes ago       Running             echoserver                  0                   1085ce381835a
	783c18600128c       nginx@sha256:1761fb5661e4d77e107427d8012ad3a5955007d997e0f4a3d41acc9ff20467c7                          5 minutes ago       Running             myfrontend                  0                   e149cb21c2aec
	3163711af75c5       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969          5 minutes ago       Running             echoserver                  0                   ba8fdbd263482
	331bdd505c951       nginx@sha256:87fb6f4040ffd52dd616f360b8520ed4482930ea75417182ad3f76c4aaadf24f                          5 minutes ago       Running             nginx                       0                   ff9d5c6069a4d
	528c3ff58b368       mysql@sha256:b3a86578a582617214477d91e47e850f9e18df0b5d1644fb2d96d91a340b8972                          5 minutes ago       Running             mysql                       0                   6f2c14a4c9cea
	b584628b303eb       a4ca41631cc7a                                                                                          6 minutes ago       Running             coredns                     3                   ac09e2888dacb
	ca8d6ee6be3f6       6e38f40d628db                                                                                          6 minutes ago       Running             storage-provisioner         4                   c6f7e38896881
	b9aa2328969a2       2ae1ba6417cbc                                                                                          6 minutes ago       Running             kube-proxy                  3                   19595682bc159
	6f1ac2160f5a9       586c112956dfc                                                                                          6 minutes ago       Running             kube-controller-manager     4                   32fff227e14a7
	b7e406ef17780       d521dd763e2e3                                                                                          6 minutes ago       Running             kube-apiserver              0                   47197d3d675c4
	909bfd3779583       3a5aa3a515f5d                                                                                          6 minutes ago       Running             kube-scheduler              3                   287ced331c101
	745ed1fdef657       aebe758cef4cd                                                                                          6 minutes ago       Running             etcd                        3                   f10809b909407
	4e97b02d55889       586c112956dfc                                                                                          6 minutes ago       Exited              kube-controller-manager     3                   32fff227e14a7
	88390e98e47f1       a4ca41631cc7a                                                                                          6 minutes ago       Exited              coredns                     2                   72c9f6ee89970
	48b1c6341c821       3a5aa3a515f5d                                                                                          7 minutes ago       Exited              kube-scheduler              2                   bef96eafcf3f1
	20d5987c4568d       6e38f40d628db                                                                                          7 minutes ago       Exited              storage-provisioner         3                   588dadc44ee40
	98a7f7d7a7a4b       2ae1ba6417cbc                                                                                          7 minutes ago       Exited              kube-proxy                  2                   2e1193292f69f
	d45776c7f7dbd       aebe758cef4cd                                                                                          7 minutes ago       Exited              etcd                        2                   8548be0763eb7
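	
	Note: these rows come from the container runtime inside the node; a hedged cross-check (assuming the cluster is still up) is to list the containers directly:
	
	  out/minikube-darwin-amd64 ssh -p functional-20220728144449-12923 -- docker ps -a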
	
	* 
	* ==> coredns [88390e98e47f] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [b584628b303e] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220728144449-12923
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220728144449-12923
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551
	                    minikube.k8s.io/name=functional-20220728144449-12923
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_28T14_45_14_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Jul 2022 21:45:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220728144449-12923
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Jul 2022 21:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Jul 2022 21:49:07 +0000   Thu, 28 Jul 2022 21:45:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Jul 2022 21:49:07 +0000   Thu, 28 Jul 2022 21:45:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Jul 2022 21:49:07 +0000   Thu, 28 Jul 2022 21:45:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Jul 2022 21:49:07 +0000   Thu, 28 Jul 2022 21:45:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220728144449-12923
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                a5b5349e-67ca-42a7-b34d-b1aed4f9c919
	  Boot ID:                    defbf4cf-33ea-4fb3-a1e4-ed0c170fdaf9
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54c4b5c49f-lr6xw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  default                     hello-node-connect-578cdc45cb-zds9n                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  default                     mysql-67f7d69d8b-nmcfj                                   600m (10%)    700m (11%)  512Mi (8%)       700Mi (11%)    6m5s
	  default                     nginx-svc                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  default                     sp-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 coredns-6d4b75cb6d-5gzlj                                 100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     8m15s
	  kube-system                 etcd-functional-20220728144449-12923                     100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         8m28s
	  kube-system                 kube-apiserver-functional-20220728144449-12923           250m (4%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-functional-20220728144449-12923  200m (3%)    0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-proxy-86779                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-scheduler-functional-20220728144449-12923           100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m13s
	  kubernetes-dashboard        dashboard-metrics-scraper-78dbd9dbf5-95cr4               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-zmdxg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (22%)  700m (11%)
	  memory             682Mi (11%)  870Mi (14%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m14s                  kube-proxy       
	  Normal  Starting                 7m42s                  kube-proxy       
	  Normal  Starting                 8m2s                   kube-proxy       
	  Normal  Starting                 6m35s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m38s (x5 over 8m39s)  kubelet          Node functional-20220728144449-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m38s (x4 over 8m39s)  kubelet          Node functional-20220728144449-12923 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m38s (x5 over 8m39s)  kubelet          Node functional-20220728144449-12923 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m28s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m28s                  kubelet          Node functional-20220728144449-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m28s                  kubelet          Node functional-20220728144449-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m28s                  kubelet          Node functional-20220728144449-12923 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m18s                  kubelet          Node functional-20220728144449-12923 status is now: NodeReady
	  Normal  RegisteredNode           8m15s                  node-controller  Node functional-20220728144449-12923 event: Registered Node functional-20220728144449-12923 in Controller
	  Normal  RegisteredNode           7m29s                  node-controller  Node functional-20220728144449-12923 event: Registered Node functional-20220728144449-12923 in Controller
	  Normal  NodeAllocatableEnforced  6m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m40s (x8 over 6m41s)  kubelet          Node functional-20220728144449-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x8 over 6m41s)  kubelet          Node functional-20220728144449-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m40s (x7 over 6m41s)  kubelet          Node functional-20220728144449-12923 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m25s                  node-controller  Node functional-20220728144449-12923 event: Registered Node functional-20220728144449-12923 in Controller
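	
	Note: this section mirrors what kubectl reports for the node; a minimal sketch for regenerating it (assuming the kubeconfig from this run is active):
	
	  kubectl describe node functional-20220728144449-12923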
	
	* 
	* ==> dmesg <==
	* [  +0.001439] FS-Cache: O-key=[8] '377f2e0300000000'
	[  +0.001052] FS-Cache: N-cookie c=000000003d1587d4 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001741] FS-Cache: N-cookie d=00000000b3020e27 n=00000000e968409d
	[  +0.001451] FS-Cache: N-key=[8] '377f2e0300000000'
	[  +0.001921] FS-Cache: Duplicate cookie detected
	[  +0.001022] FS-Cache: O-cookie c=00000000fc272a13 [p=00000000a81fea9c fl=226 nc=0 na=1]
	[  +0.001775] FS-Cache: O-cookie d=00000000b3020e27 n=0000000005e7aa76
	[  +0.001457] FS-Cache: O-key=[8] '377f2e0300000000'
	[  +0.001097] FS-Cache: N-cookie c=000000003d1587d4 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001747] FS-Cache: N-cookie d=00000000b3020e27 n=00000000992429b3
	[  +0.001462] FS-Cache: N-key=[8] '377f2e0300000000'
	[  +3.054735] FS-Cache: Duplicate cookie detected
	[  +0.001042] FS-Cache: O-cookie c=00000000d2d2cc51 [p=00000000a81fea9c fl=226 nc=0 na=1]
	[  +0.001759] FS-Cache: O-cookie d=00000000b3020e27 n=0000000030e50417
	[  +0.001442] FS-Cache: O-key=[8] '367f2e0300000000'
	[  +0.001131] FS-Cache: N-cookie c=000000007bcf2158 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001760] FS-Cache: N-cookie d=00000000b3020e27 n=00000000d445df4c
	[  +0.001503] FS-Cache: N-key=[8] '367f2e0300000000'
	[  +0.439912] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=000000000a15bb65 [p=00000000a81fea9c fl=226 nc=0 na=1]
	[  +0.001773] FS-Cache: O-cookie d=00000000b3020e27 n=00000000a4d7a621
	[  +0.001447] FS-Cache: O-key=[8] '3e7f2e0300000000'
	[  +0.001103] FS-Cache: N-cookie c=000000001f485fd0 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001738] FS-Cache: N-cookie d=00000000b3020e27 n=000000000eab18f1
	[  +0.001440] FS-Cache: N-key=[8] '3e7f2e0300000000'
	
	* 
	* ==> etcd [745ed1fdef65] <==
	* {"level":"info","ts":"2022-07-28T21:46:53.390Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-07-28T21:46:53.391Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-07-28T21:46:53.392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-07-28T21:46:53.393Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-07-28T21:46:53.393Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T21:46:53.393Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T21:46:53.393Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-28T21:46:53.393Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-07-28T21:46:53.394Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-07-28T21:46:53.394Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-28T21:46:53.394Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T21:46:54.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2022-07-28T21:46:54.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2022-07-28T21:46:54.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2022-07-28T21:46:54.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 5"}
	{"level":"info","ts":"2022-07-28T21:46:54.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2022-07-28T21:46:54.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 5"}
	{"level":"info","ts":"2022-07-28T21:46:54.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 5"}
	{"level":"info","ts":"2022-07-28T21:46:54.589Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220728144449-12923 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T21:46:54.589Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T21:46:54.590Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T21:46:54.590Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T21:46:54.590Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T21:46:54.590Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-07-28T21:46:54.593Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [d45776c7f7db] <==
	* {"level":"info","ts":"2022-07-28T21:45:56.623Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T21:45:56.624Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-07-28T21:45:56.624Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-07-28T21:45:58.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2022-07-28T21:45:58.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2022-07-28T21:45:58.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2022-07-28T21:45:58.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2022-07-28T21:45:58.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2022-07-28T21:45:58.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2022-07-28T21:45:58.016Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2022-07-28T21:45:58.018Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220728144449-12923 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T21:45:58.018Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T21:45:58.018Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T21:45:58.019Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T21:45:58.019Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T21:45:58.020Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-07-28T21:45:58.020Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-28T21:46:33.079Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-07-28T21:46:33.079Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"functional-20220728144449-12923","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	WARNING: 2022/07/28 21:46:33 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/07/28 21:46:33 [core] grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 192.168.49.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-07-28T21:46:33.088Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2022-07-28T21:46:33.090Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-07-28T21:46:33.091Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-07-28T21:46:33.091Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"functional-20220728144449-12923","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  21:53:42 up 14 min,  0 users,  load average: 0.25, 0.50, 0.37
	Linux functional-20220728144449-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [b7e406ef1778] <==
	* I0728 21:47:05.032658       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0728 21:47:05.033338       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0728 21:47:05.033766       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0728 21:47:05.034154       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0728 21:47:05.034787       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0728 21:47:05.040515       1 cache.go:39] Caches are synced for autoregister controller
	I0728 21:47:05.040700       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0728 21:47:05.720332       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0728 21:47:05.937980       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0728 21:47:06.552216       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0728 21:47:06.557242       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0728 21:47:06.578482       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0728 21:47:06.587415       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0728 21:47:06.592301       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0728 21:47:07.216825       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0728 21:47:24.563321       1 controller.go:611] quota admission added evaluator for: endpoints
	I0728 21:47:37.905776       1 alloc.go:327] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.104.90.166]
	I0728 21:47:37.912973       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0728 21:47:37.918001       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0728 21:47:57.315491       1 alloc.go:327] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.100.67.141]
	I0728 21:48:08.887918       1 alloc.go:327] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.107.168.26]
	I0728 21:48:22.984626       1 alloc.go:327] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.104.97.32]
	I0728 21:48:41.630828       1 controller.go:611] quota admission added evaluator for: namespaces
	I0728 21:48:41.790885       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.104.186.157]
	I0728 21:48:41.802753       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.109.69.221]
	
	* 
	* ==> kube-controller-manager [4e97b02d5588] <==
	* 	/usr/local/go/src/bytes/buffer.go:204 +0x98
	crypto/tls.(*Conn).readFromUntil(0xc000b84000, {0x4cff140?, 0xc000590020}, 0x931?)
		/usr/local/go/src/crypto/tls/conn.go:807 +0xe5
	crypto/tls.(*Conn).readRecordOrCCS(0xc000b84000, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:614 +0x116
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:582
	crypto/tls.(*Conn).Read(0xc000b84000, {0xc000c33000, 0x1000, 0x9197e0?})
		/usr/local/go/src/crypto/tls/conn.go:1285 +0x16f
	bufio.(*Reader).Read(0xc00030b860, {0xc0001582e0, 0x9, 0x9361a2?})
		/usr/local/go/src/bufio/bufio.go:236 +0x1b4
	io.ReadAtLeast({0x4cf6a40, 0xc00030b860}, {0xc0001582e0, 0x9, 0x9}, 0x9)
		/usr/local/go/src/io/io.go:331 +0x9a
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:350
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc0001582e0?, 0x9?, 0xc001bc29c0?}, {0x4cf6a40?, 0xc00030b860?})
		vendor/golang.org/x/net/http2/frame.go:237 +0x6e
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0001582a0)
		vendor/golang.org/x/net/http2/frame.go:498 +0x95
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000beff98)
		vendor/golang.org/x/net/http2/transport.go:2101 +0x130
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc00014aa80)
		vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		vendor/golang.org/x/net/http2/transport.go:725 +0xa65
	
	* 
	* ==> kube-controller-manager [6f1ac2160f5a] <==
	* I0728 21:47:37.916728       1 event.go:294] "Event occurred" object="default/mysql" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-67f7d69d8b to 1"
	I0728 21:47:37.935487       1 event.go:294] "Event occurred" object="default/mysql-67f7d69d8b" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-67f7d69d8b-nmcfj"
	I0728 21:48:08.270949       1 event.go:294] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0728 21:48:08.818855       1 event.go:294] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-578cdc45cb to 1"
	I0728 21:48:08.821026       1 event.go:294] "Event occurred" object="default/hello-node-connect-578cdc45cb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-578cdc45cb-zds9n"
	I0728 21:48:22.937963       1 event.go:294] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54c4b5c49f to 1"
	I0728 21:48:22.941338       1 event.go:294] "Event occurred" object="default/hello-node-54c4b5c49f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54c4b5c49f-lr6xw"
	I0728 21:48:41.659706       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-78dbd9dbf5 to 1"
	I0728 21:48:41.666466       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-78dbd9dbf5-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 21:48:41.671827       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" failed with pods "dashboard-metrics-scraper-78dbd9dbf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 21:48:41.673348       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0728 21:48:41.681044       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" failed with pods "dashboard-metrics-scraper-78dbd9dbf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 21:48:41.681088       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 21:48:41.681454       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-78dbd9dbf5-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 21:48:41.685274       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 21:48:41.686335       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" failed with pods "dashboard-metrics-scraper-78dbd9dbf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 21:48:41.686426       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-78dbd9dbf5-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 21:48:41.690574       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 21:48:41.690575       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 21:48:41.694304       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 21:48:41.694346       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 21:48:41.697176       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" failed with pods "dashboard-metrics-scraper-78dbd9dbf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 21:48:41.697256       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-78dbd9dbf5-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 21:48:41.704536       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-zmdxg"
	I0728 21:48:41.773375       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-78dbd9dbf5-95cr4"
	
	* 
	* ==> kube-proxy [98a7f7d7a7a4] <==
	* E0728 21:45:56.686084       1 node.go:152] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-20220728144449-12923": dial tcp 192.168.49.2:8441: connect: connection refused
	I0728 21:45:59.918253       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0728 21:45:59.918291       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0728 21:45:59.918312       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 21:45:59.998311       1 server_others.go:206] "Using iptables Proxier"
	I0728 21:45:59.998362       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 21:45:59.998374       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 21:45:59.998384       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 21:45:59.998401       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 21:45:59.998556       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 21:45:59.999202       1 server.go:661] "Version info" version="v1.24.3"
	I0728 21:45:59.999295       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 21:46:00.000572       1 config.go:317] "Starting service config controller"
	I0728 21:46:00.000629       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 21:46:00.000765       1 config.go:226] "Starting endpoint slice config controller"
	I0728 21:46:00.000808       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 21:46:00.000891       1 config.go:444] "Starting node config controller"
	I0728 21:46:00.000915       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 21:46:00.100730       1 shared_informer.go:262] Caches are synced for service config
	I0728 21:46:00.100855       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0728 21:46:00.101243       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [b9aa2328969a] <==
	* I0728 21:47:07.185768       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0728 21:47:07.185821       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0728 21:47:07.185841       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 21:47:07.213315       1 server_others.go:206] "Using iptables Proxier"
	I0728 21:47:07.213373       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 21:47:07.214043       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 21:47:07.214099       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 21:47:07.214120       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 21:47:07.214232       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 21:47:07.214460       1 server.go:661] "Version info" version="v1.24.3"
	I0728 21:47:07.214488       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 21:47:07.214933       1 config.go:317] "Starting service config controller"
	I0728 21:47:07.214965       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 21:47:07.215014       1 config.go:444] "Starting node config controller"
	I0728 21:47:07.215020       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 21:47:07.215267       1 config.go:226] "Starting endpoint slice config controller"
	I0728 21:47:07.215302       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 21:47:07.315288       1 shared_informer.go:262] Caches are synced for node config
	I0728 21:47:07.315341       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0728 21:47:07.315447       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [48b1c6341c82] <==
	* I0728 21:46:38.031523       1 serving.go:348] Generated self-signed cert in-memory
	
	* 
	* ==> kube-scheduler [909bfd377958] <==
	* I0728 21:47:02.818598       1 serving.go:348] Generated self-signed cert in-memory
	W0728 21:47:04.974519       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0728 21:47:04.974546       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0728 21:47:04.974555       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0728 21:47:04.974560       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0728 21:47:05.038632       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0728 21:47:05.038666       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 21:47:05.039562       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0728 21:47:05.039659       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0728 21:47:05.039670       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0728 21:47:05.039680       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0728 21:47:05.141356       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 21:44:55 UTC, end at Thu 2022-07-28 21:53:43 UTC. --
	Jul 28 21:48:20 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:20.429093    9390 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrrmg\" (UniqueName: \"kubernetes.io/projected/58975b1b-5f70-4766-b43c-d227d370e24d-kube-api-access-vrrmg\") pod \"sp-pod\" (UID: \"58975b1b-5f70-4766-b43c-d227d370e24d\") " pod="default/sp-pod"
	Jul 28 21:48:21 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:21.914122    9390 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=042a20cc-9cd4-479c-83c6-ce3971243a2f path="/var/lib/kubelet/pods/042a20cc-9cd4-479c-83c6-ce3971243a2f/volumes"
	Jul 28 21:48:22 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:22.948709    9390 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 21:48:23 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:23.048793    9390 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmjlc\" (UniqueName: \"kubernetes.io/projected/fe8feb67-fdfc-461e-b5f3-f90436cc4c15-kube-api-access-tmjlc\") pod \"hello-node-54c4b5c49f-lr6xw\" (UID: \"fe8feb67-fdfc-461e-b5f3-f90436cc4c15\") " pod="default/hello-node-54c4b5c49f-lr6xw"
	Jul 28 21:48:31 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:31.303993    9390 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 21:48:31 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:31.423165    9390 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-787zp\" (UniqueName: \"kubernetes.io/projected/cd336173-37bc-4fa1-a215-1044cc988e09-kube-api-access-787zp\") pod \"busybox-mount\" (UID: \"cd336173-37bc-4fa1-a215-1044cc988e09\") " pod="default/busybox-mount"
	Jul 28 21:48:31 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:31.423222    9390 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/cd336173-37bc-4fa1-a215-1044cc988e09-test-volume\") pod \"busybox-mount\" (UID: \"cd336173-37bc-4fa1-a215-1044cc988e09\") " pod="default/busybox-mount"
	Jul 28 21:48:35 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:35.456033    9390 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-787zp\" (UniqueName: \"kubernetes.io/projected/cd336173-37bc-4fa1-a215-1044cc988e09-kube-api-access-787zp\") pod \"cd336173-37bc-4fa1-a215-1044cc988e09\" (UID: \"cd336173-37bc-4fa1-a215-1044cc988e09\") "
	Jul 28 21:48:35 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:35.456105    9390 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/cd336173-37bc-4fa1-a215-1044cc988e09-test-volume\") pod \"cd336173-37bc-4fa1-a215-1044cc988e09\" (UID: \"cd336173-37bc-4fa1-a215-1044cc988e09\") "
	Jul 28 21:48:35 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:35.456166    9390 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd336173-37bc-4fa1-a215-1044cc988e09-test-volume" (OuterVolumeSpecName: "test-volume") pod "cd336173-37bc-4fa1-a215-1044cc988e09" (UID: "cd336173-37bc-4fa1-a215-1044cc988e09"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jul 28 21:48:35 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:35.458302    9390 operation_generator.go:856] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd336173-37bc-4fa1-a215-1044cc988e09-kube-api-access-787zp" (OuterVolumeSpecName: "kube-api-access-787zp") pod "cd336173-37bc-4fa1-a215-1044cc988e09" (UID: "cd336173-37bc-4fa1-a215-1044cc988e09"). InnerVolumeSpecName "kube-api-access-787zp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 28 21:48:35 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:35.558691    9390 reconciler.go:312] "Volume detached for volume \"kube-api-access-787zp\" (UniqueName: \"kubernetes.io/projected/cd336173-37bc-4fa1-a215-1044cc988e09-kube-api-access-787zp\") on node \"functional-20220728144449-12923\" DevicePath \"\""
	Jul 28 21:48:35 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:35.558785    9390 reconciler.go:312] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/cd336173-37bc-4fa1-a215-1044cc988e09-test-volume\") on node \"functional-20220728144449-12923\" DevicePath \"\""
	Jul 28 21:48:36 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:36.243308    9390 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="882ff8991f8ea58d2e560cfa0c6c2c27101faf621f21cd748ba9755e0a4b6dd6"
	Jul 28 21:48:41 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:41.708468    9390 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 21:48:41 functional-20220728144449-12923 kubelet[9390]: E0728 21:48:41.708551    9390 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="cd336173-37bc-4fa1-a215-1044cc988e09" containerName="mount-munger"
	Jul 28 21:48:41 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:41.708581    9390 memory_manager.go:345] "RemoveStaleState removing state" podUID="cd336173-37bc-4fa1-a215-1044cc988e09" containerName="mount-munger"
	Jul 28 21:48:41 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:41.779590    9390 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 21:48:41 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:41.803164    9390 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/70c897c4-3384-437c-8e5a-72be312a4cde-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-zmdxg\" (UID: \"70c897c4-3384-437c-8e5a-72be312a4cde\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-zmdxg"
	Jul 28 21:48:41 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:41.803232    9390 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdq9w\" (UniqueName: \"kubernetes.io/projected/70c897c4-3384-437c-8e5a-72be312a4cde-kube-api-access-bdq9w\") pod \"kubernetes-dashboard-5fd5574d9f-zmdxg\" (UID: \"70c897c4-3384-437c-8e5a-72be312a4cde\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-zmdxg"
	Jul 28 21:48:41 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:41.904370    9390 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76qgv\" (UniqueName: \"kubernetes.io/projected/b8860d41-67ef-467f-bbee-3534aa9f1fbe-kube-api-access-76qgv\") pod \"dashboard-metrics-scraper-78dbd9dbf5-95cr4\" (UID: \"b8860d41-67ef-467f-bbee-3534aa9f1fbe\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5-95cr4"
	Jul 28 21:48:41 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:41.904442    9390 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b8860d41-67ef-467f-bbee-3534aa9f1fbe-tmp-volume\") pod \"dashboard-metrics-scraper-78dbd9dbf5-95cr4\" (UID: \"b8860d41-67ef-467f-bbee-3534aa9f1fbe\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-78dbd9dbf5-95cr4"
	Jul 28 21:48:42 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:42.611792    9390 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="96fcd01bf34ac7c264c47450c605f09bf957ec50ca01fb951f89597b35302a90"
	Jul 28 21:48:42 functional-20220728144449-12923 kubelet[9390]: I0728 21:48:42.653029    9390 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3397071b2afeebc63bea5f250118060d1ca39e49efa688f15e2ccb9264433964"
	Jul 28 21:52:01 functional-20220728144449-12923 kubelet[9390]: W0728 21:52:01.752967    9390 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	
	* 
	* ==> kubernetes-dashboard [bf9d6f3d8f62] <==
	* 2022/07/28 21:48:49 Using namespace: kubernetes-dashboard
	2022/07/28 21:48:49 Using in-cluster config to connect to apiserver
	2022/07/28 21:48:49 Using secret token for csrf signing
	2022/07/28 21:48:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/28 21:48:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/28 21:48:49 Successful initial request to the apiserver, version: v1.24.3
	2022/07/28 21:48:49 Generating JWE encryption key
	2022/07/28 21:48:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/28 21:48:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/28 21:48:49 Initializing JWE encryption key from synchronized object
	2022/07/28 21:48:49 Creating in-cluster Sidecar client
	2022/07/28 21:48:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/28 21:48:49 Serving insecurely on HTTP port: 9090
	2022/07/28 21:48:49 Starting overwatch
	2022/07/28 21:49:19 Successful request to sidecar
	
	* 
	* ==> storage-provisioner [20d5987c4568] <==
	* I0728 21:46:22.595040       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 21:46:22.607155       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 21:46:22.607198       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [ca8d6ee6be3f] <==
	* I0728 21:47:07.159402       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 21:47:07.166020       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 21:47:07.166046       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0728 21:47:24.564595       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0728 21:47:24.564789       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220728144449-12923_39e26a57-916d-4b36-b460-c971fd405953!
	I0728 21:47:24.564775       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1aef058d-755c-4433-98ab-b65988a6ceac", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220728144449-12923_39e26a57-916d-4b36-b460-c971fd405953 became leader
	I0728 21:47:24.665162       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220728144449-12923_39e26a57-916d-4b36-b460-c971fd405953!
	I0728 21:48:08.270395       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0728 21:48:08.270448       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    59503d53-7183-444d-9c58-b8a63f764d81 371 0 2022-07-28 21:45:29 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-07-28 21:45:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-cdc98d5d-9204-4dcc-b587-30a74ae4cfc1 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  cdc98d5d-9204-4dcc-b587-30a74ae4cfc1 675 0 2022-07-28 21:48:08 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2022-07-28 21:48:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2022-07-28 21:48:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0728 21:48:08.270738       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-cdc98d5d-9204-4dcc-b587-30a74ae4cfc1" provisioned
	I0728 21:48:08.270769       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0728 21:48:08.270774       1 volume_store.go:212] Trying to save persistentvolume "pvc-cdc98d5d-9204-4dcc-b587-30a74ae4cfc1"
	I0728 21:48:08.273434       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"cdc98d5d-9204-4dcc-b587-30a74ae4cfc1", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0728 21:48:08.277360       1 volume_store.go:219] persistentvolume "pvc-cdc98d5d-9204-4dcc-b587-30a74ae4cfc1" saved
	I0728 21:48:08.277582       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"cdc98d5d-9204-4dcc-b587-30a74ae4cfc1", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-cdc98d5d-9204-4dcc-b587-30a74ae4cfc1
	

                                                
                                                
-- /stdout --
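A note on the kube-controller-manager [6f1ac2160f5a] section above: the burst of FailedCreate / 'serviceaccount "kubernetes-dashboard" not found' errors at 21:48:41 is the usual addon-startup ordering race — the dashboard Deployments were applied a few milliseconds before their ServiceAccount existed, and the ReplicaSets simply retried until both pods were created within the same second (the two SuccessfulCreate events). The goroutine dump from the other controller-manager instance [4e97b02d5588] appears to be the teardown of the pre-restart instance rather than a crash of the running one. If this race needed manual confirmation against the same profile, standard kubectl queries would do; the profile and namespace names below are taken from the log above:

	kubectl --context functional-20220728144449-12923 -n kubernetes-dashboard get serviceaccount,deployment,pod
	kubectl --context functional-20220728144449-12923 -n kubernetes-dashboard get events --sort-by=.lastTimestamp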
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20220728144449-12923 -n functional-20220728144449-12923
helpers_test.go:261: (dbg) Run:  kubectl --context functional-20220728144449-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-mount
helpers_test.go:272: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context functional-20220728144449-12923 describe pod busybox-mount
helpers_test.go:280: (dbg) kubectl --context functional-20220728144449-12923 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:         busybox-mount
	Namespace:    default
	Priority:     0
	Node:         functional-20220728144449-12923/192.168.49.2
	Start Time:   Thu, 28 Jul 2022 14:48:31 -0700
	Labels:       integration-test=busybox-mount
	Annotations:  <none>
	Status:       Succeeded
	IP:           172.17.0.8
	IPs:
	  IP:  172.17.0.8
	Containers:
	  mount-munger:
	    Container ID:  docker://4b46ccf38ee81c06ff73a2641712b15e3f4cf412c3c40a01eb1d84e21b665a2c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 28 Jul 2022 14:48:33 -0700
	      Finished:     Thu, 28 Jul 2022 14:48:33 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-787zp (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-787zp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m13s  default-scheduler  Successfully assigned default/busybox-mount to functional-20220728144449-12923
	  Normal  Pulling    5m13s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.740543338s
	  Normal  Created    5m11s  kubelet            Created container mount-munger
	  Normal  Started    5m11s  kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
helpers_test.go:283: <<< TestFunctional/parallel/DashboardCmd FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/DashboardCmd (304.36s)
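Reading this post-mortem as a whole: the kubernetes-dashboard container log shows the dashboard serving on HTTP port 9090 and successfully reaching its sidecar, and the only non-running pod is the busybox-mount helper, which completed with exit code 0. The cluster side therefore looks healthy, which suggests the DashboardCmd failure sits in the minikube CLI/test-harness path rather than in the cluster. A manual spot-check against the same profile could bypass the CLI entirely by port-forwarding the dashboard service (assuming the addon's usual service port of 80 in front of container port 9090, which this log does not itself confirm):

	kubectl --context functional-20220728144449-12923 -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8080:80
	curl -sI http://127.0.0.1:8080/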

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (253.93s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220728145348-12923 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0728 14:57:13.960990   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:57:37.914707   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 14:57:37.921128   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 14:57:37.933344   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 14:57:37.953537   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 14:57:37.993993   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 14:57:38.074933   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 14:57:38.235088   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 14:57:38.557386   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 14:57:39.199629   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 14:57:40.479926   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 14:57:43.040140   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 14:57:48.162395   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 14:57:58.404558   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220728145348-12923 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m13.899679651s)

                                                
                                                
-- stdout --
	* [ingress-addon-legacy-20220728145348-12923] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-20220728145348-12923 in cluster ingress-addon-legacy-20220728145348-12923
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
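Note the doubled "Generating certificates and keys ..." / "Booting up control plane ..." steps in the stdout above: the bootstrapper evidently retried kubeadm after a failed first attempt before giving up with exit status 109. The kubelet/apiserver detail behind a start failure like this usually lives in the machine logs, which can be pulled from the same profile:

	out/minikube-darwin-amd64 logs -p ingress-addon-legacy-20220728145348-12923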
** stderr ** 
	I0728 14:53:48.538558   17341 out.go:296] Setting OutFile to fd 1 ...
	I0728 14:53:48.538704   17341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:53:48.538709   17341 out.go:309] Setting ErrFile to fd 2...
	I0728 14:53:48.538713   17341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:53:48.538822   17341 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 14:53:48.539406   17341 out.go:303] Setting JSON to false
	I0728 14:53:48.554336   17341 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":6270,"bootTime":1659038958,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 14:53:48.554450   17341 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 14:53:48.597635   17341 out.go:177] * [ingress-addon-legacy-20220728145348-12923] minikube v1.26.0 on Darwin 12.5
	I0728 14:53:48.619660   17341 notify.go:193] Checking for updates...
	I0728 14:53:48.641668   17341 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 14:53:48.663642   17341 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 14:53:48.706407   17341 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 14:53:48.727910   17341 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 14:53:48.749643   17341 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 14:53:48.772943   17341 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 14:53:48.840727   17341 docker.go:137] docker version: linux-20.10.17
	I0728 14:53:48.840836   17341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 14:53:48.971624   17341 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-28 21:53:48.897329109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 14:53:48.992855   17341 out.go:177] * Using the docker driver based on user configuration
	I0728 14:53:49.035909   17341 start.go:284] selected driver: docker
	I0728 14:53:49.035933   17341 start.go:808] validating driver "docker" against <nil>
	I0728 14:53:49.035959   17341 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 14:53:49.039306   17341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 14:53:49.171720   17341 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-28 21:53:49.096607239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 14:53:49.171886   17341 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0728 14:53:49.172040   17341 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 14:53:49.192732   17341 out.go:177] * Using Docker Desktop driver with root privileges
	I0728 14:53:49.214065   17341 cni.go:95] Creating CNI manager for ""
	I0728 14:53:49.214094   17341 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 14:53:49.214111   17341 start_flags.go:310] config:
	{Name:ingress-addon-legacy-20220728145348-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220728145348-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 14:53:49.235997   17341 out.go:177] * Starting control plane node ingress-addon-legacy-20220728145348-12923 in cluster ingress-addon-legacy-20220728145348-12923
	I0728 14:53:49.257916   17341 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 14:53:49.279810   17341 out.go:177] * Pulling base image ...
	I0728 14:53:49.301050   17341 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0728 14:53:49.301116   17341 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 14:53:49.363696   17341 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 14:53:49.363719   17341 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 14:53:49.371065   17341 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0728 14:53:49.371082   17341 cache.go:57] Caching tarball of preloaded images
	I0728 14:53:49.371342   17341 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0728 14:53:49.414502   17341 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0728 14:53:49.479502   17341 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0728 14:53:49.572788   17341 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0728 14:53:54.232311   17341 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0728 14:53:54.232452   17341 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0728 14:53:54.856950   17341 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0728 14:53:54.857213   17341 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/config.json ...
	I0728 14:53:54.857258   17341 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/config.json: {Name:mk344266f53173f3e571f9e55d9e7a8e3b281dea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 14:53:54.857611   17341 cache.go:208] Successfully downloaded all kic artifacts
	I0728 14:53:54.857671   17341 start.go:370] acquiring machines lock for ingress-addon-legacy-20220728145348-12923: {Name:mk7743b2e57949a2e01b6a56776e3783f26e12c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 14:53:54.857781   17341 start.go:374] acquired machines lock for "ingress-addon-legacy-20220728145348-12923" in 97.51µs
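
The `{Name:mk... Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}` spec above matches a juju/mutex-style named lock: retry every Delay until Timeout. A hedged sketch of that retry-with-timeout shape using an O_EXCL lock file (minikube's real implementation differs; path and durations here mirror the log):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// acquire polls for the lock every `delay` until `timeout` elapses.
func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// O_CREATE|O_EXCL fails if the lock file already exists,
		// giving an atomic "try-lock".
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return nil // lock held; caller removes path to release
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	if err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("lock acquired")
	os.Remove("/tmp/machines.lock") // release
}
```
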
	I0728 14:53:54.857822   17341 start.go:92] Provisioning new machine with config: &{Name:ingress-addon-legacy-20220728145348-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220728145348-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 14:53:54.857869   17341 start.go:132] createHost starting for "" (driver="docker")
	I0728 14:53:54.881268   17341 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0728 14:53:54.881631   17341 start.go:166] libmachine.API.Create for "ingress-addon-legacy-20220728145348-12923" (driver="docker")
	I0728 14:53:54.881694   17341 client.go:168] LocalClient.Create starting
	I0728 14:53:54.881852   17341 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem
	I0728 14:53:54.881935   17341 main.go:134] libmachine: Decoding PEM data...
	I0728 14:53:54.881961   17341 main.go:134] libmachine: Parsing certificate...
	I0728 14:53:54.882071   17341 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem
	I0728 14:53:54.882125   17341 main.go:134] libmachine: Decoding PEM data...
	I0728 14:53:54.882142   17341 main.go:134] libmachine: Parsing certificate...
	I0728 14:53:54.903819   17341 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220728145348-12923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0728 14:53:55.000930   17341 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220728145348-12923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0728 14:53:55.001022   17341 network_create.go:272] running [docker network inspect ingress-addon-legacy-20220728145348-12923] to gather additional debugging logs...
	I0728 14:53:55.001038   17341 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-20220728145348-12923
	W0728 14:53:55.062012   17341 cli_runner.go:211] docker network inspect ingress-addon-legacy-20220728145348-12923 returned with exit code 1
	I0728 14:53:55.062044   17341 network_create.go:275] error running [docker network inspect ingress-addon-legacy-20220728145348-12923]: docker network inspect ingress-addon-legacy-20220728145348-12923: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-20220728145348-12923
	I0728 14:53:55.062076   17341 network_create.go:277] output of [docker network inspect ingress-addon-legacy-20220728145348-12923]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-20220728145348-12923
	
	** /stderr **
	I0728 14:53:55.062177   17341 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0728 14:53:55.123425   17341 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00000e950] misses:0}
	I0728 14:53:55.123471   17341 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 14:53:55.123494   17341 network_create.go:115] attempt to create docker network ingress-addon-legacy-20220728145348-12923 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0728 14:53:55.123583   17341 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-20220728145348-12923 ingress-addon-legacy-20220728145348-12923
	I0728 14:53:55.215706   17341 network_create.go:99] docker network ingress-addon-legacy-20220728145348-12923 192.168.49.0/24 created
	I0728 14:53:55.215739   17341 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-20220728145348-12923" container
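
The subnet reservation above (network.go:288/235) derives gateway 192.168.49.1, first client 192.168.49.2, and broadcast 192.168.49.255 from the free /24, and kic.go then assigns that first client address as the node's static IP. A sketch of the underlying arithmetic (helper name invented, not minikube's code):

```go
package main

import (
	"fmt"
	"net"
)

// addOffset returns the IPv4 address at network+n; fine for small
// offsets inside a /24.
func addOffset(base net.IP, n byte) net.IP {
	out := make(net.IP, 4)
	copy(out, base.To4())
	out[3] += n
	return out
}

func main() {
	_, ipnet, _ := net.ParseCIDR("192.168.49.0/24")
	network := ipnet.IP
	fmt.Println("gateway:  ", addOffset(network, 1))   // 192.168.49.1
	fmt.Println("firstNode:", addOffset(network, 2))   // 192.168.49.2
	fmt.Println("broadcast:", addOffset(network, 255)) // 192.168.49.255
}
```
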
	I0728 14:53:55.215834   17341 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0728 14:53:55.276460   17341 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-20220728145348-12923 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220728145348-12923 --label created_by.minikube.sigs.k8s.io=true
	I0728 14:53:55.337827   17341 oci.go:103] Successfully created a docker volume ingress-addon-legacy-20220728145348-12923
	I0728 14:53:55.337947   17341 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-20220728145348-12923-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220728145348-12923 --entrypoint /usr/bin/test -v ingress-addon-legacy-20220728145348-12923:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0728 14:53:55.782126   17341 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-20220728145348-12923
	I0728 14:53:55.782170   17341 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0728 14:53:55.782185   17341 kic.go:179] Starting extracting preloaded images to volume ...
	I0728 14:53:55.782278   17341 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220728145348-12923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
	I0728 14:54:00.046873   17341 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-20220728145348-12923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir: (4.264581578s)
	I0728 14:54:00.046897   17341 kic.go:188] duration metric: took 4.264755 seconds to extract preloaded images to volume
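
The preload extraction above bind-mounts the lz4 tarball read-only, mounts the named volume at /extractDir, and lets the kicbase image's tar untar with `-I lz4`; the cli_runner lines wrap each docker invocation and report its duration. A hedged sketch of that shell-out-and-time pattern (the wrapper is invented for illustration; the format string mimics the "Completed: ... (4.264581578s)" lines):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runDocker executes the docker CLI and logs a duration metric.
func runDocker(args ...string) (string, error) {
	start := time.Now()
	out, err := exec.Command("docker", args...).CombinedOutput()
	fmt.Printf("Completed: docker %v: (%s)\n", args, time.Since(start))
	return string(out), err
}

func main() {
	out, err := runDocker("version", "--format", "{{.Server.Version}}")
	fmt.Print(out)
	if err != nil {
		fmt.Println("error:", err)
	}
}
```
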
	I0728 14:54:00.047017   17341 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0728 14:54:00.176090   17341 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-20220728145348-12923 --name ingress-addon-legacy-20220728145348-12923 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-20220728145348-12923 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-20220728145348-12923 --network ingress-addon-legacy-20220728145348-12923 --ip 192.168.49.2 --volume ingress-addon-legacy-20220728145348-12923:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0728 14:54:00.546808   17341 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220728145348-12923 --format={{.State.Running}}
	I0728 14:54:00.611050   17341 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220728145348-12923 --format={{.State.Status}}
	I0728 14:54:00.677283   17341 cli_runner.go:164] Run: docker exec ingress-addon-legacy-20220728145348-12923 stat /var/lib/dpkg/alternatives/iptables
	I0728 14:54:00.779457   17341 oci.go:144] the created container "ingress-addon-legacy-20220728145348-12923" has a running status.
	I0728 14:54:00.779484   17341 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/ingress-addon-legacy-20220728145348-12923/id_rsa...
	I0728 14:54:00.856736   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/ingress-addon-legacy-20220728145348-12923/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0728 14:54:00.856787   17341 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/ingress-addon-legacy-20220728145348-12923/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0728 14:54:00.965105   17341 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220728145348-12923 --format={{.State.Status}}
	I0728 14:54:01.027953   17341 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0728 14:54:01.027973   17341 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-20220728145348-12923 chown docker:docker /home/docker/.ssh/authorized_keys]
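
The kic.go:210 / kic_runner lines above generate an SSH keypair on the host, copy the public half into the container's /home/docker/.ssh/authorized_keys (the 381 bytes logged), and chown it to the docker user. A sketch of producing such a keypair and authorized_keys entry, assuming golang.org/x/crypto/ssh; not minikube's exact code:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// PEM-encode the private key (what lands in .../machines/<name>/id_rsa).
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	// One-line authorized_keys entry for the container.
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}
```
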
	I0728 14:54:01.140375   17341 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220728145348-12923 --format={{.State.Status}}
	I0728 14:54:01.202616   17341 machine.go:88] provisioning docker machine ...
	I0728 14:54:01.202663   17341 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-20220728145348-12923"
	I0728 14:54:01.202760   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:01.266363   17341 main.go:134] libmachine: Using SSH client type: native
	I0728 14:54:01.266554   17341 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55819 <nil> <nil>}
	I0728 14:54:01.266569   17341 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-20220728145348-12923 && echo "ingress-addon-legacy-20220728145348-12923" | sudo tee /etc/hostname
	I0728 14:54:01.395686   17341 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-20220728145348-12923
	
	I0728 14:54:01.395777   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:01.458770   17341 main.go:134] libmachine: Using SSH client type: native
	I0728 14:54:01.458923   17341 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55819 <nil> <nil>}
	I0728 14:54:01.458952   17341 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-20220728145348-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-20220728145348-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-20220728145348-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 14:54:01.580964   17341 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 14:54:01.580986   17341 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 14:54:01.581014   17341 ubuntu.go:177] setting up certificates
	I0728 14:54:01.581026   17341 provision.go:83] configureAuth start
	I0728 14:54:01.581101   17341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:01.643878   17341 provision.go:138] copyHostCerts
	I0728 14:54:01.643918   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 14:54:01.643971   17341 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 14:54:01.643981   17341 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 14:54:01.644084   17341 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 14:54:01.644248   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 14:54:01.644280   17341 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 14:54:01.644284   17341 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 14:54:01.644340   17341 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 14:54:01.644451   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 14:54:01.644479   17341 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 14:54:01.644484   17341 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 14:54:01.644537   17341 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 14:54:01.644652   17341 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-20220728145348-12923 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-20220728145348-12923]
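
The provision.go:112 line above issues a server certificate signed by the minikube CA, with the IP and DNS SANs listed in `san=[...]`. A compressed, hedged illustration of that issuance with Go's crypto/x509 (self-signed stand-in CA; SANs, org, and the 26280h expiry are taken from the log, everything else is invented):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA standing in for .minikube/certs/ca.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN list from the provision.go line above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-20220728145348-12923"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-20220728145348-12923"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server cert: %d DER bytes\n", len(der))
}
```
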
	I0728 14:54:01.733553   17341 provision.go:172] copyRemoteCerts
	I0728 14:54:01.733608   17341 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 14:54:01.733651   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:01.796923   17341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55819 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/ingress-addon-legacy-20220728145348-12923/id_rsa Username:docker}
	I0728 14:54:01.883288   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 14:54:01.883359   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 14:54:01.899680   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 14:54:01.899764   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1294 bytes)
	I0728 14:54:01.916406   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 14:54:01.916477   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 14:54:01.933016   17341 provision.go:86] duration metric: configureAuth took 351.977797ms
	I0728 14:54:01.933029   17341 ubuntu.go:193] setting minikube options for container-runtime
	I0728 14:54:01.933188   17341 config.go:178] Loaded profile config "ingress-addon-legacy-20220728145348-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0728 14:54:01.933236   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:01.995838   17341 main.go:134] libmachine: Using SSH client type: native
	I0728 14:54:01.996002   17341 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55819 <nil> <nil>}
	I0728 14:54:01.996017   17341 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 14:54:02.117300   17341 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 14:54:02.117319   17341 ubuntu.go:71] root file system type: overlay
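
The `df --output=fstype / | tail -n 1` probe above detects the container's root filesystem type (overlay here), which informs the storage assumptions in the docker unit written next. A minimal equivalent sketch (GNU df, as in the Ubuntu-based kic container; function name invented):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFSType returns the last line of `df --output=fstype /`,
// i.e. the filesystem type of the root mount (e.g. "overlay").
func rootFSType() (string, error) {
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	return lines[len(lines)-1], nil
}

func main() {
	t, err := rootFSType()
	if err != nil {
		panic(err)
	}
	fmt.Println(t)
}
```
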
	I0728 14:54:02.117455   17341 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 14:54:02.117530   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:02.180881   17341 main.go:134] libmachine: Using SSH client type: native
	I0728 14:54:02.181140   17341 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55819 <nil> <nil>}
	I0728 14:54:02.181188   17341 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 14:54:02.310836   17341 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 14:54:02.310916   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:02.374184   17341 main.go:134] libmachine: Using SSH client type: native
	I0728 14:54:02.374338   17341 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 55819 <nil> <nil>}
	I0728 14:54:02.374350   17341 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 14:54:02.946581   17341 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-28 21:54:02.318963139 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0728 14:54:02.946614   17341 machine.go:91] provisioned docker machine in 1.74399364s
	I0728 14:54:02.946631   17341 client.go:171] LocalClient.Create took 8.065010395s
	I0728 14:54:02.946650   17341 start.go:174] duration metric: libmachine.API.Create for "ingress-addon-legacy-20220728145348-12923" took 8.065102863s
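
The `sudo diff -u old new || { mv ...; systemctl ... restart docker; }` command earlier is an update-only-if-changed idiom: diff exits 0 when the files match, so the move/reload/restart branch runs only when the rendered unit actually differs (which is why the whole unified diff appears in the output above). A hedged Go sketch of the same idea, with paths from the log (error handling compressed; a missing old file counts as "changed", loosely matching diff's failure):

```go
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	oldB, _ := os.ReadFile("/lib/systemd/system/docker.service")
	newB, err := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(oldB, newB) {
		return // unit unchanged; skip the needless docker restart
	}
	if err := os.Rename("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service"); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"-f", "enable", "docker"},
		{"-f", "restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			panic(string(out))
		}
	}
}
```
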
	I0728 14:54:02.946662   17341 start.go:307] post-start starting for "ingress-addon-legacy-20220728145348-12923" (driver="docker")
	I0728 14:54:02.946672   17341 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 14:54:02.946758   17341 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 14:54:02.946831   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:03.011808   17341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55819 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/ingress-addon-legacy-20220728145348-12923/id_rsa Username:docker}
	I0728 14:54:03.100001   17341 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 14:54:03.104354   17341 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 14:54:03.104371   17341 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 14:54:03.104377   17341 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 14:54:03.104382   17341 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 14:54:03.104394   17341 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 14:54:03.104502   17341 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 14:54:03.104643   17341 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 14:54:03.104650   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> /etc/ssl/certs/129232.pem
	I0728 14:54:03.104797   17341 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 14:54:03.111726   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 14:54:03.127941   17341 start.go:310] post-start completed in 181.266211ms
	I0728 14:54:03.128452   17341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:03.192535   17341 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/config.json ...
	I0728 14:54:03.192941   17341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 14:54:03.192992   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:03.255992   17341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55819 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/ingress-addon-legacy-20220728145348-12923/id_rsa Username:docker}
	I0728 14:54:03.340741   17341 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 14:54:03.344868   17341 start.go:135] duration metric: createHost completed in 8.487077054s
	I0728 14:54:03.344884   17341 start.go:82] releasing machines lock for "ingress-addon-legacy-20220728145348-12923", held for 8.487160521s
	I0728 14:54:03.344956   17341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:03.408869   17341 ssh_runner.go:195] Run: systemctl --version
	I0728 14:54:03.408874   17341 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 14:54:03.408947   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:03.408950   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:03.475652   17341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55819 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/ingress-addon-legacy-20220728145348-12923/id_rsa Username:docker}
	I0728 14:54:03.475654   17341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55819 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/ingress-addon-legacy-20220728145348-12923/id_rsa Username:docker}
	I0728 14:54:03.749810   17341 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 14:54:03.760437   17341 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 14:54:03.760497   17341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 14:54:03.769289   17341 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 14:54:03.781965   17341 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 14:54:03.846702   17341 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 14:54:03.910566   17341 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 14:54:03.967716   17341 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 14:54:04.159005   17341 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 14:54:04.193909   17341 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 14:54:04.272275   17341 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.17 ...
	I0728 14:54:04.272427   17341 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-20220728145348-12923 dig +short host.docker.internal
	I0728 14:54:04.438524   17341 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 14:54:04.438885   17341 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 14:54:04.444319   17341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 14:54:04.453868   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-20220728145348-12923
	I0728 14:54:04.517411   17341 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0728 14:54:04.517487   17341 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 14:54:04.546983   17341 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0728 14:54:04.547000   17341 docker.go:542] Images already preloaded, skipping extraction
	I0728 14:54:04.547080   17341 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 14:54:04.575442   17341 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0728 14:54:04.575461   17341 cache_images.go:84] Images are preloaded, skipping loading
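
The two `docker images --format {{.Repository}}:{{.Tag}}` listings above back the "Images are preloaded, skipping loading" decision: the expected v1.18.20 image set is checked against what the runtime already has. A sketch of that presence check (the `want` list is copied from the log; the comparison code is invented, not minikube's cache_images.go):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	want := []string{
		"k8s.gcr.io/kube-proxy:v1.18.20",
		"k8s.gcr.io/kube-apiserver:v1.18.20",
		"k8s.gcr.io/kube-scheduler:v1.18.20",
		"k8s.gcr.io/kube-controller-manager:v1.18.20",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"k8s.gcr.io/pause:3.2",
		"k8s.gcr.io/coredns:1.6.7",
		"k8s.gcr.io/etcd:3.4.3-0",
	}
	for _, img := range want {
		if !have[img] {
			fmt.Println("missing:", img)
		}
	}
}
```
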
	I0728 14:54:04.575552   17341 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 14:54:04.648426   17341 cni.go:95] Creating CNI manager for ""
	I0728 14:54:04.648438   17341 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 14:54:04.648449   17341 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 14:54:04.648463   17341 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-20220728145348-12923 NodeName:ingress-addon-legacy-20220728145348-12923 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 14:54:04.648580   17341 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-20220728145348-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 14:54:04.648663   17341 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-20220728145348-12923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220728145348-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0728 14:54:04.648720   17341 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0728 14:54:04.656142   17341 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 14:54:04.656188   17341 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 14:54:04.663852   17341 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I0728 14:54:04.676751   17341 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0728 14:54:04.688871   17341 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2084 bytes)
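
The 2084-byte kubeadm.yaml scp'd from memory above is the config printed at kubeadm.go:162, rendered from the kubeadm options struct. A hedged sketch of that render step using Go's text/template; the template here is a tiny invented fragment covering only the InitConfiguration endpoint fields, not minikube's real template:

```go
package main

import (
	"os"
	"text/template"
)

// endpoint carries the two fields the fragment below consumes;
// values are taken from the kubeadm options logged above.
type endpoint struct {
	AdvertiseAddress string
	BindPort         int
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, endpoint{"192.168.49.2", 8443}); err != nil {
		panic(err)
	}
}
```
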
	I0728 14:54:04.701186   17341 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0728 14:54:04.704910   17341 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 14:54:04.713906   17341 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923 for IP: 192.168.49.2
	I0728 14:54:04.714019   17341 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 14:54:04.714087   17341 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 14:54:04.714135   17341 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/client.key
	I0728 14:54:04.714147   17341 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/client.crt with IP's: []
	I0728 14:54:04.796888   17341 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/client.crt ...
	I0728 14:54:04.796901   17341 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/client.crt: {Name:mk0b59512789ece101927c8bc437548af07a3929 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 14:54:04.797189   17341 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/client.key ...
	I0728 14:54:04.797196   17341 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/client.key: {Name:mk8bd2b2cd338f6ffba724a792241e73f484ab5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 14:54:04.797393   17341 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.key.dd3b5fb2
	I0728 14:54:04.797408   17341 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0728 14:54:04.878178   17341 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.crt.dd3b5fb2 ...
	I0728 14:54:04.878188   17341 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.crt.dd3b5fb2: {Name:mk0713f6f6f67988a4b672a47ea9638c9da3446a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 14:54:04.878410   17341 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.key.dd3b5fb2 ...
	I0728 14:54:04.878418   17341 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.key.dd3b5fb2: {Name:mke6f03d8a151cd3bd6411a4975b6fa2b75abf84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 14:54:04.878589   17341 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.crt
	I0728 14:54:04.878908   17341 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.key
	I0728 14:54:04.879074   17341 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/proxy-client.key
	I0728 14:54:04.879089   17341 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/proxy-client.crt with IP's: []
	I0728 14:54:05.101968   17341 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/proxy-client.crt ...
	I0728 14:54:05.101981   17341 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/proxy-client.crt: {Name:mk31ba54d912a79ce31fe842e3a04681e8779d44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 14:54:05.102263   17341 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/proxy-client.key ...
	I0728 14:54:05.102270   17341 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/proxy-client.key: {Name:mkf0a6b0fbd7f0ec93a6ee15703bd8f57b616608 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 14:54:05.102483   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0728 14:54:05.102507   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0728 14:54:05.102524   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0728 14:54:05.102559   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0728 14:54:05.102576   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0728 14:54:05.102593   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0728 14:54:05.102609   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0728 14:54:05.102624   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0728 14:54:05.102734   17341 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 14:54:05.102777   17341 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 14:54:05.102804   17341 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 14:54:05.102868   17341 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 14:54:05.102907   17341 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 14:54:05.102954   17341 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 14:54:05.103030   17341 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 14:54:05.103057   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0728 14:54:05.103075   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem -> /usr/share/ca-certificates/12923.pem
	I0728 14:54:05.103089   17341 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> /usr/share/ca-certificates/129232.pem
	I0728 14:54:05.103512   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 14:54:05.120765   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0728 14:54:05.136784   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 14:54:05.153294   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/ingress-addon-legacy-20220728145348-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 14:54:05.169821   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 14:54:05.186188   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 14:54:05.202577   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 14:54:05.218784   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 14:54:05.235178   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 14:54:05.252408   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 14:54:05.268610   17341 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 14:54:05.285184   17341 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 14:54:05.298053   17341 ssh_runner.go:195] Run: openssl version
	I0728 14:54:05.303043   17341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 14:54:05.310523   17341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 14:54:05.314644   17341 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 14:54:05.314692   17341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 14:54:05.319668   17341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 14:54:05.327095   17341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 14:54:05.334423   17341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 14:54:05.338064   17341 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 14:54:05.338102   17341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 14:54:05.343307   17341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 14:54:05.350585   17341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 14:54:05.357984   17341 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 14:54:05.361789   17341 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 14:54:05.361826   17341 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 14:54:05.366698   17341 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 14:54:05.374093   17341 kubeadm.go:395] StartCluster: {Name:ingress-addon-legacy-20220728145348-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-20220728145348-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 14:54:05.374197   17341 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 14:54:05.402961   17341 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 14:54:05.410206   17341 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 14:54:05.417207   17341 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 14:54:05.417248   17341 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 14:54:05.424105   17341 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 14:54:05.424132   17341 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 14:54:06.145877   17341 out.go:204]   - Generating certificates and keys ...
	I0728 14:54:08.292994   17341 out.go:204]   - Booting up control plane ...
	W0728 14:56:03.210050   17341 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-20220728145348-12923 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-20220728145348-12923 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0728 21:54:05.479635     955 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0728 21:54:08.290858     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0728 21:54:08.291649     955 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0728 14:56:03.210100   17341 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0728 14:56:03.630454   17341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 14:56:03.639504   17341 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 14:56:03.639552   17341 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 14:56:03.646566   17341 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 14:56:03.646592   17341 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 14:56:04.345824   17341 out.go:204]   - Generating certificates and keys ...
	I0728 14:56:04.871696   17341 out.go:204]   - Booting up control plane ...
	I0728 14:57:59.789577   17341 kubeadm.go:397] StartCluster complete in 3m54.41784514s
	I0728 14:57:59.789650   17341 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 14:57:59.818000   17341 logs.go:274] 0 containers: []
	W0728 14:57:59.818012   17341 logs.go:276] No container was found matching "kube-apiserver"
	I0728 14:57:59.818072   17341 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 14:57:59.847359   17341 logs.go:274] 0 containers: []
	W0728 14:57:59.847373   17341 logs.go:276] No container was found matching "etcd"
	I0728 14:57:59.847435   17341 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 14:57:59.876602   17341 logs.go:274] 0 containers: []
	W0728 14:57:59.876613   17341 logs.go:276] No container was found matching "coredns"
	I0728 14:57:59.876673   17341 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 14:57:59.904779   17341 logs.go:274] 0 containers: []
	W0728 14:57:59.904791   17341 logs.go:276] No container was found matching "kube-scheduler"
	I0728 14:57:59.904878   17341 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 14:57:59.933230   17341 logs.go:274] 0 containers: []
	W0728 14:57:59.933243   17341 logs.go:276] No container was found matching "kube-proxy"
	I0728 14:57:59.933300   17341 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 14:57:59.962273   17341 logs.go:274] 0 containers: []
	W0728 14:57:59.962285   17341 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 14:57:59.962341   17341 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 14:57:59.990181   17341 logs.go:274] 0 containers: []
	W0728 14:57:59.990200   17341 logs.go:276] No container was found matching "storage-provisioner"
	I0728 14:57:59.990275   17341 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 14:58:00.021133   17341 logs.go:274] 0 containers: []
	W0728 14:58:00.021149   17341 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 14:58:00.021161   17341 logs.go:123] Gathering logs for kubelet ...
	I0728 14:58:00.021171   17341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 14:58:00.063664   17341 logs.go:123] Gathering logs for dmesg ...
	I0728 14:58:00.063690   17341 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 14:58:00.075699   17341 logs.go:123] Gathering logs for describe nodes ...
	I0728 14:58:00.075714   17341 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 14:58:00.127802   17341 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 14:58:00.127817   17341 logs.go:123] Gathering logs for Docker ...
	I0728 14:58:00.127823   17341 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 14:58:00.143267   17341 logs.go:123] Gathering logs for container status ...
	I0728 14:58:00.143281   17341 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 14:58:02.196761   17341 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053482972s)
	W0728 14:58:02.196891   17341 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0728 21:56:03.701428    3426 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0728 21:56:04.866869    3426 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0728 21:56:04.867567    3426 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0728 14:58:02.196909   17341 out.go:239] * 
	W0728 14:58:02.197062   17341 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0728 21:56:03.701428    3426 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0728 21:56:04.866869    3426 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0728 21:56:04.867567    3426 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 14:58:02.197078   17341 out.go:239] * 
	W0728 14:58:02.197623   17341 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 14:58:02.260320   17341 out.go:177] 
	W0728 14:58:02.302269   17341 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0728 21:56:03.701428    3426 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0728 21:56:04.866869    3426 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0728 21:56:04.867567    3426 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in Docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0728 21:56:03.701428    3426 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0728 21:56:04.866869    3426 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0728 21:56:04.867567    3426 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 14:58:02.302371   17341 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0728 14:58:02.302417   17341 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0728 14:58:02.323154   17341 out.go:177] 
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220728145348-12923 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (253.93s)
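
The failure above ends exactly where minikube's own suggestion points: the kubelet never answered its health check on port 10248, and the log recommends retrying with the systemd cgroup driver. A minimal sketch of that retry, reusing the profile name and flags from this run (an illustration of the suggested remediation, not part of the test harness):

    minikube delete -p ingress-addon-legacy-20220728145348-12923
    minikube start -p ingress-addon-legacy-20220728145348-12923 \
      --kubernetes-version=v1.18.20 --memory=4096 --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd
    # If the kubelet still fails its health check, inspect it inside the node container:
    minikube ssh -p ingress-addon-legacy-20220728145348-12923 "sudo journalctl -xeu kubelet"
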
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.59s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220728145348-12923 addons enable ingress --alsologtostderr -v=5
E0728 14:58:18.885185   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 14:58:59.845355   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220728145348-12923 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.108972827s)
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	* Verifying ingress addon...
	
	
-- /stdout --
** stderr ** 
	I0728 14:58:02.463628   17688 out.go:296] Setting OutFile to fd 1 ...
	I0728 14:58:02.464525   17688 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:58:02.464533   17688 out.go:309] Setting ErrFile to fd 2...
	I0728 14:58:02.464542   17688 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:58:02.464651   17688 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 14:58:02.486338   17688 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0728 14:58:02.508638   17688 config.go:178] Loaded profile config "ingress-addon-legacy-20220728145348-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0728 14:58:02.508668   17688 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-20220728145348-12923"
	I0728 14:58:02.508685   17688 addons.go:153] Setting addon ingress=true in "ingress-addon-legacy-20220728145348-12923"
	I0728 14:58:02.509332   17688 host.go:66] Checking if "ingress-addon-legacy-20220728145348-12923" exists ...
	I0728 14:58:02.510227   17688 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220728145348-12923 --format={{.State.Status}}
	I0728 14:58:02.595654   17688 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	I0728 14:58:02.616488   17688 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0728 14:58:02.637383   17688 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0728 14:58:02.658404   17688 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0728 14:58:02.679571   17688 addons.go:345] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0728 14:58:02.679590   17688 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I0728 14:58:02.679667   17688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220728145348-12923
	I0728 14:58:02.742369   17688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55819 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/ingress-addon-legacy-20220728145348-12923/id_rsa Username:docker}
	I0728 14:58:02.835328   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:58:02.884236   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:02.884258   17688 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:03.162725   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:58:03.214845   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:03.214860   17688 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:03.755992   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:58:03.807315   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:03.807330   17688 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:04.463591   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:58:04.515001   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:04.515016   17688 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:05.307069   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:58:05.357489   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:05.357507   17688 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:06.529293   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:58:06.581984   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:06.582000   17688 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:08.837424   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:58:08.888789   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:08.888807   17688 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:10.499725   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:58:10.554896   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:10.554911   17688 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:13.360500   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:58:13.410988   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:13.411008   17688 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:17.236020   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:58:17.288433   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:17.288447   17688 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:24.987662   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:58:25.040833   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:25.040848   17688 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:39.677221   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:58:39.728845   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:58:39.728859   17688 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:08.135524   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:59:08.187257   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:08.187271   17688 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:31.357695   17688 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0728 14:59:31.408261   17688 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:31.408296   17688 addons.go:383] Verifying addon ingress=true in "ingress-addon-legacy-20220728145348-12923"
	I0728 14:59:31.429852   17688 out.go:177] * Verifying ingress addon...
	I0728 14:59:31.451484   17688 out.go:177] 
	W0728 14:59:31.473683   17688 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220728145348-12923" does not exist: client config: context "ingress-addon-legacy-20220728145348-12923" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-20220728145348-12923" does not exist: client config: context "ingress-addon-legacy-20220728145348-12923" does not exist]
	W0728 14:59:31.473708   17688 out.go:239] * 
	* 
	W0728 14:59:31.476838   17688 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 14:59:31.497910   17688 out.go:177] 
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220728145348-12923
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220728145348-12923:
-- stdout --
	[
	    {
	        "Id": "98dd0bb2df8b9cb103e3a9bb445fb190fd633ee4f7cbc21cd6745a28cbdc492f",
	        "Created": "2022-07-28T21:54:00.251760548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 41534,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T21:54:00.547261807Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/98dd0bb2df8b9cb103e3a9bb445fb190fd633ee4f7cbc21cd6745a28cbdc492f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/98dd0bb2df8b9cb103e3a9bb445fb190fd633ee4f7cbc21cd6745a28cbdc492f/hostname",
	        "HostsPath": "/var/lib/docker/containers/98dd0bb2df8b9cb103e3a9bb445fb190fd633ee4f7cbc21cd6745a28cbdc492f/hosts",
	        "LogPath": "/var/lib/docker/containers/98dd0bb2df8b9cb103e3a9bb445fb190fd633ee4f7cbc21cd6745a28cbdc492f/98dd0bb2df8b9cb103e3a9bb445fb190fd633ee4f7cbc21cd6745a28cbdc492f-json.log",
	        "Name": "/ingress-addon-legacy-20220728145348-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220728145348-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220728145348-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/91607a8fe8a6a834a4d0805fa6ac72f4d5df1c99785bd994484d6e7423654c0d-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/docker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d059732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/91607a8fe8a6a834a4d0805fa6ac72f4d5df1c99785bd994484d6e7423654c0d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/91607a8fe8a6a834a4d0805fa6ac72f4d5df1c99785bd994484d6e7423654c0d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/91607a8fe8a6a834a4d0805fa6ac72f4d5df1c99785bd994484d6e7423654c0d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220728145348-12923",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220728145348-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220728145348-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220728145348-12923",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220728145348-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7653accb6d85a06e61b049920f4dbd6643211c565efea87001af4fc98fd1032b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55819"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55820"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55821"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55822"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55823"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7653accb6d85",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220728145348-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "98dd0bb2df8b",
	                        "ingress-addon-legacy-20220728145348-12923"
	                    ],
	                    "NetworkID": "413056ff2edaf4abf4d75a0d1428c63de75ddd913a68de5783d3ee3afaa13cb1",
	                    "EndpointID": "3341382737d5c6f0e80a6844ba158414f152c1c8abd7544d6a959af02a75c786",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
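
Most of the inspect dump above is boilerplate; the post-mortem really keys on the container state, the published ports, and the network address. A hypothetical jq one-liner (not part of the test harness) that pulls just those fields from the same output:

    docker inspect ingress-addon-legacy-20220728145348-12923 \
      | jq '.[0] | {status: .State.Status, ip: .NetworkSettings.Networks[].IPAddress, ssh_port: .NetworkSettings.Ports["22/tcp"][0].HostPort}'
    # For this run: {"status": "running", "ip": "192.168.49.2", "ssh_port": "55819"}
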
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220728145348-12923 -n ingress-addon-legacy-20220728145348-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220728145348-12923 -n ingress-addon-legacy-20220728145348-12923: exit status 6 (413.318286ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0728 14:59:31.991274   17795 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220728145348-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220728145348-12923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.59s)
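
The root cause here is the same apiserver that never came up in StartLegacyK8sCluster: every kubectl apply retry was refused on localhost:8443, and the status check warns that the profile's context is missing from the kubeconfig. A sketch of the two follow-ups the output itself suggests (profile name taken from this run; outcomes not verified here):

    # Repair the stale kubectl context the status warning complains about:
    minikube update-context -p ingress-addon-legacy-20220728145348-12923
    # Check whether an apiserver container ever started inside the node:
    minikube ssh -p ingress-addon-legacy-20220728145348-12923 "docker ps -a | grep kube-apiserver"
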
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.53s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220728145348-12923 addons enable ingress-dns --alsologtostderr -v=5
E0728 15:00:21.764916   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220728145348-12923 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.054169638s)
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	
-- /stdout --
** stderr ** 
	I0728 14:59:32.049767   17805 out.go:296] Setting OutFile to fd 1 ...
	I0728 14:59:32.050464   17805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:59:32.050470   17805 out.go:309] Setting ErrFile to fd 2...
	I0728 14:59:32.050474   17805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:59:32.050579   17805 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 14:59:32.072040   17805 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0728 14:59:32.093922   17805 config.go:178] Loaded profile config "ingress-addon-legacy-20220728145348-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0728 14:59:32.093953   17805 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-20220728145348-12923"
	I0728 14:59:32.093970   17805 addons.go:153] Setting addon ingress-dns=true in "ingress-addon-legacy-20220728145348-12923"
	I0728 14:59:32.094474   17805 host.go:66] Checking if "ingress-addon-legacy-20220728145348-12923" exists ...
	I0728 14:59:32.095377   17805 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-20220728145348-12923 --format={{.State.Status}}
	I0728 14:59:32.180865   17805 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	I0728 14:59:32.201590   17805 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0728 14:59:32.222713   17805 addons.go:345] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0728 14:59:32.222735   17805 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0728 14:59:32.222817   17805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-20220728145348-12923
	I0728 14:59:32.287375   17805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55819 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/ingress-addon-legacy-20220728145348-12923/id_rsa Username:docker}
	I0728 14:59:32.383191   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 14:59:32.431073   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:32.431098   17805 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:32.709567   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 14:59:32.762433   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:32.762450   17805 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:33.302952   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 14:59:33.354778   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:33.354795   17805 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:34.010567   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 14:59:34.064006   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:34.064026   17805 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:34.855792   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 14:59:34.907820   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:34.907837   17805 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:36.080420   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 14:59:36.130647   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:36.130666   17805 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:38.383982   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 14:59:38.434669   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:38.434690   17805 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:40.045596   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 14:59:40.097369   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:40.097384   17805 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:42.904031   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 14:59:42.958152   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:42.958167   17805 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:46.783364   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 14:59:46.835759   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:46.835778   17805 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:54.534205   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 14:59:54.586795   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 14:59:54.586812   17805 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 15:00:09.224581   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 15:00:09.276182   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 15:00:09.276198   17805 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 15:00:37.684914   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 15:00:37.738545   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 15:00:37.738560   17805 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 15:01:00.908947   17805 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0728 15:01:00.962637   17805 addons.go:366] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0728 15:01:00.984621   17805 out.go:177] 
	W0728 15:01:01.006175   17805 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0728 15:01:01.006201   17805 out.go:239] * 
	W0728 15:01:01.010107   17805 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 15:01:01.031108   17805 out.go:177] 
** /stderr **
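The stderr above shows minikube's retry helper (retry.go:31) sleeping a jittered, roughly doubling interval between kubectl apply attempts (1.17s, 2.25s, ... 28.4s) before the addon enable finally gives up. Below is a minimal, self-contained sketch of that bounded exponential-backoff pattern; the function name and constants are illustrative, not minikube's actual pkg/util/retry implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or maxElapsed is exceeded, sleeping a
// jittered, roughly doubling delay between attempts, the same shape as the
// 1.17s, 2.25s, ... 28.4s intervals logged by retry.go:31 above.
func retry(fn func() error, maxElapsed time.Duration) error {
	start := time.Now()
	delay := time.Second
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if elapsed := time.Since(start); elapsed > maxElapsed {
			return fmt.Errorf("gave up after %s: %w", elapsed.Round(time.Millisecond), err)
		}
		// Up to 100% jitter so concurrent retries do not synchronize.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retry(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("connection refused") // stand-in for the failing kubectl apply
		}
		return nil
	}, 30*time.Second)
	fmt.Println("err:", err, "attempts:", attempts)
}

The jitter keeps concurrent callbacks from hammering the apiserver in lockstep, and the elapsed-time cap bounds the total wait so the addon enable fails quickly enough for CI.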
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220728145348-12923
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220728145348-12923:
-- stdout --
	[
	    {
	        "Id": "98dd0bb2df8b9cb103e3a9bb445fb190fd633ee4f7cbc21cd6745a28cbdc492f",
	        "Created": "2022-07-28T21:54:00.251760548Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 41534,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T21:54:00.547261807Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/98dd0bb2df8b9cb103e3a9bb445fb190fd633ee4f7cbc21cd6745a28cbdc492f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/98dd0bb2df8b9cb103e3a9bb445fb190fd633ee4f7cbc21cd6745a28cbdc492f/hostname",
	        "HostsPath": "/var/lib/docker/containers/98dd0bb2df8b9cb103e3a9bb445fb190fd633ee4f7cbc21cd6745a28cbdc492f/hosts",
	        "LogPath": "/var/lib/docker/containers/98dd0bb2df8b9cb103e3a9bb445fb190fd633ee4f7cbc21cd6745a28cbdc492f/98dd0bb2df8b9cb103e3a9bb445fb190fd633ee4f7cbc21cd6745a28cbdc492f-json.log",
	        "Name": "/ingress-addon-legacy-20220728145348-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-20220728145348-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-20220728145348-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/91607a8fe8a6a834a4d0805fa6ac72f4d5df1c99785bd994484d6e7423654c0d-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/91607a8fe8a6a834a4d0805fa6ac72f4d5df1c99785bd994484d6e7423654c0d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/91607a8fe8a6a834a4d0805fa6ac72f4d5df1c99785bd994484d6e7423654c0d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/91607a8fe8a6a834a4d0805fa6ac72f4d5df1c99785bd994484d6e7423654c0d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-20220728145348-12923",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-20220728145348-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-20220728145348-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-20220728145348-12923",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-20220728145348-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7653accb6d85a06e61b049920f4dbd6643211c565efea87001af4fc98fd1032b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55819"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55820"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55821"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55822"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "55823"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7653accb6d85",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-20220728145348-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "98dd0bb2df8b",
	                        "ingress-addon-legacy-20220728145348-12923"
	                    ],
	                    "NetworkID": "413056ff2edaf4abf4d75a0d1428c63de75ddd913a68de5783d3ee3afaa13cb1",
	                    "EndpointID": "3341382737d5c6f0e80a6844ba158414f152c1c8abd7544d6a959af02a75c786",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
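The post-mortem harness first dumps the full docker inspect JSON (above), then narrows to single fields with Go templates, as in the --format={{.Host}} and --format={{.State.Status}} invocations that follow. A small sketch of the same field-extraction technique, assuming only that a container with this run's profile name exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Extracts one field from docker inspect via a Go template instead of
// parsing the full JSON dump, mirroring the --format usage in this report.
// The container name is the profile from this run; any container works.
func main() {
	name := "ingress-addon-legacy-20220728145348-12923"
	out, err := exec.Command("docker", "inspect", "-f", "{{.State.Status}}", name).Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Printf("%s is %s\n", name, strings.TrimSpace(string(out)))
}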
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220728145348-12923 -n ingress-addon-legacy-20220728145348-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220728145348-12923 -n ingress-addon-legacy-20220728145348-12923: exit status 6 (412.478029ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0728 15:01:01.523787   17921 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220728145348-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220728145348-12923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.53s)
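Every apply attempt above failed with "connection refused" on localhost:8443, meaning the apiserver inside the node container never started listening; the addon enable is a casualty, not the root cause. One quick way to reproduce the symptom from the macOS host is to dial the forwarded apiserver port (55823 maps to 8443/tcp in the docker inspect output above). This probe is an illustrative check, not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

// Dials the apiserver endpoint the way kubectl's connection attempt does.
// Inside the node container the target is localhost:8443 (as in the log);
// from the host the equivalent is the forwarded port, 127.0.0.1:55823 per
// the 8443/tcp mapping in the docker inspect output above.
func main() {
	const addr = "127.0.0.1:55823" // this run's forwarded apiserver port
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// With no apiserver listening this returns "connection refused"
		// almost immediately, matching the stderr repeated above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}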
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.48s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:158: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-20220728145348-12923
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-20220728145348-12923:
-- stdout --
	[docker inspect output omitted: byte-for-byte identical to the dump shown above under ValidateIngressDNSAddonActivation]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220728145348-12923 -n ingress-addon-legacy-20220728145348-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-20220728145348-12923 -n ingress-addon-legacy-20220728145348-12923: exit status 6 (413.635456ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0728 15:01:02.006414   17933 status.go:413] kubeconfig endpoint: extract IP: "ingress-addon-legacy-20220728145348-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-20220728145348-12923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.48s)
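Both status checks fail at status.go:413 because the profile's context is missing from the kubeconfig, so minikube cannot extract an endpoint IP to compare against the running container (hence the "stale minikube-vm" warning and the suggested minikube update-context). The failing lookup is roughly the following sketch using k8s.io/client-go; the profile name and KUBECONFIG path come from this run, and the code is illustrative rather than minikube's exact status.go logic:

package main

import (
	"fmt"
	"net/url"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// Resolves a profile's apiserver host from a kubeconfig: the context name
// must exist in the file before an endpoint IP can be extracted. When the
// profile is absent, the lookup fails exactly like status.go:413 above.
func main() {
	profile := "ingress-addon-legacy-20220728145348-12923"
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	ctx, ok := cfg.Contexts[profile]
	if !ok {
		fmt.Printf("%q does not appear in the kubeconfig\n", profile)
		return
	}
	cluster, ok := cfg.Clusters[ctx.Cluster]
	if !ok {
		fmt.Println("context points at a missing cluster entry")
		return
	}
	u, err := url.Parse(cluster.Server)
	if err != nil {
		fmt.Println("parse server URL:", err)
		return
	}
	fmt.Println("endpoint host:", u.Hostname())
}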
TestMultiNode/serial/RestartKeepsNodes (242.65s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220728150610-12923
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220728150610-12923
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220728150610-12923: (36.712389385s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220728150610-12923 --wait=true -v=8 --alsologtostderr
E0728 15:12:13.982887   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 15:12:37.936263   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
multinode_test.go:293: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220728150610-12923 --wait=true -v=8 --alsologtostderr: exit status 80 (3m19.24275157s)
-- stdout --
	* [multinode-20220728150610-12923] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-20220728150610-12923 in cluster multinode-20220728150610-12923
	* Pulling base image ...
	* Restarting existing docker container for "multinode-20220728150610-12923" ...
	* Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Starting worker node multinode-20220728150610-12923-m02 in cluster multinode-20220728150610-12923
	* Pulling base image ...
	* Restarting existing docker container for "multinode-20220728150610-12923-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.58.2
	* Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	  - env NO_PROXY=192.168.58.2
	
	
-- /stdout --
** stderr ** 
	I0728 15:10:15.811246   20887 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:10:15.811450   20887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:10:15.811455   20887 out.go:309] Setting ErrFile to fd 2...
	I0728 15:10:15.811459   20887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:10:15.811575   20887 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:10:15.812090   20887 out.go:303] Setting JSON to false
	I0728 15:10:15.827445   20887 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7257,"bootTime":1659038958,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:10:15.827547   20887 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:10:15.853819   20887 out.go:177] * [multinode-20220728150610-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:10:15.896697   20887 notify.go:193] Checking for updates...
	I0728 15:10:15.918713   20887 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:10:15.939773   20887 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:10:15.961874   20887 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:10:15.984659   20887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:10:16.005855   20887 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:10:16.028562   20887 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:10:16.028642   20887 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:10:16.097020   20887 docker.go:137] docker version: linux-20.10.17
	I0728 15:10:16.097237   20887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:10:16.226272   20887 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-28 22:10:16.17124985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:10:16.247923   20887 out.go:177] * Using the docker driver based on existing profile
	I0728 15:10:16.269365   20887 start.go:284] selected driver: docker
	I0728 15:10:16.269390   20887 start.go:808] validating driver "docker" against &{Name:multinode-20220728150610-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728150610-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:10:16.269537   20887 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:10:16.269712   20887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:10:16.400959   20887 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-28 22:10:16.34597693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:10:16.403106   20887 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 15:10:16.403132   20887 cni.go:95] Creating CNI manager for ""
	I0728 15:10:16.403140   20887 cni.go:156] 3 nodes found, recommending kindnet
	I0728 15:10:16.403156   20887 start_flags.go:310] config:
	{Name:multinode-20220728150610-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728150610-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
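	The config dump above is the profile that gets saved to config.json a few lines below; the Nodes list shows the control-plane node plus workers m02 and m03 (note m03's empty ContainerRuntime field, reflecting its stopped state). As a sketch only, assuming jq is installed, the same fields can be pulled back out of the saved profile (the path below uses the default ~/.minikube location; this run's MINIKUBE_HOME is the jenkins path shown in the log):
	    # Sketch: inspect the node list in the saved profile config (jq assumed available)
	    CFG="$HOME/.minikube/profiles/multinode-20220728150610-12923/config.json"
	    jq '.Nodes[] | {Name, IP, Port, ControlPlane, Worker}' "$CFG"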
	I0728 15:10:16.425089   20887 out.go:177] * Starting control plane node multinode-20220728150610-12923 in cluster multinode-20220728150610-12923
	I0728 15:10:16.446841   20887 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:10:16.468735   20887 out.go:177] * Pulling base image ...
	I0728 15:10:16.511990   20887 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:10:16.512049   20887 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:10:16.512064   20887 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0728 15:10:16.512083   20887 cache.go:57] Caching tarball of preloaded images
	I0728 15:10:16.512285   20887 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:10:16.512307   20887 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0728 15:10:16.513316   20887 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/config.json ...
	I0728 15:10:16.576491   20887 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:10:16.576506   20887 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:10:16.576516   20887 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:10:16.576559   20887 start.go:370] acquiring machines lock for multinode-20220728150610-12923: {Name:mkd79d301f4101af8f61f3073fc793d92d8ea4af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:10:16.576641   20887 start.go:374] acquired machines lock for "multinode-20220728150610-12923" in 57.563µs
	I0728 15:10:16.576662   20887 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:10:16.576670   20887 fix.go:55] fixHost starting: 
	I0728 15:10:16.576905   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923 --format={{.State.Status}}
	I0728 15:10:16.639763   20887 fix.go:103] recreateIfNeeded on multinode-20220728150610-12923: state=Stopped err=<nil>
	W0728 15:10:16.639795   20887 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:10:16.683570   20887 out.go:177] * Restarting existing docker container for "multinode-20220728150610-12923" ...
	I0728 15:10:16.705605   20887 cli_runner.go:164] Run: docker start multinode-20220728150610-12923
	I0728 15:10:17.034676   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923 --format={{.State.Status}}
	I0728 15:10:17.098124   20887 kic.go:415] container "multinode-20220728150610-12923" state is running.
	I0728 15:10:17.098689   20887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923
	I0728 15:10:17.165300   20887 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/config.json ...
	I0728 15:10:17.165708   20887 machine.go:88] provisioning docker machine ...
	I0728 15:10:17.165733   20887 ubuntu.go:169] provisioning hostname "multinode-20220728150610-12923"
	I0728 15:10:17.165806   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:17.233380   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:10:17.233572   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0728 15:10:17.233586   20887 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220728150610-12923 && echo "multinode-20220728150610-12923" | sudo tee /etc/hostname
	I0728 15:10:17.363601   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220728150610-12923
	
	I0728 15:10:17.363692   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:17.428359   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:10:17.428540   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0728 15:10:17.428555   20887 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220728150610-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220728150610-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220728150610-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:10:17.546675   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:10:17.546695   20887 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:10:17.546717   20887 ubuntu.go:177] setting up certificates
	I0728 15:10:17.546732   20887 provision.go:83] configureAuth start
	I0728 15:10:17.546793   20887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923
	I0728 15:10:17.611863   20887 provision.go:138] copyHostCerts
	I0728 15:10:17.611917   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:10:17.611971   20887 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:10:17.611981   20887 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:10:17.612084   20887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:10:17.612268   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:10:17.612299   20887 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:10:17.612304   20887 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:10:17.612362   20887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:10:17.612479   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:10:17.612510   20887 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:10:17.612515   20887 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:10:17.612576   20887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:10:17.612690   20887 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.multinode-20220728150610-12923 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220728150610-12923]
	I0728 15:10:17.751768   20887 provision.go:172] copyRemoteCerts
	I0728 15:10:17.751838   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:10:17.751884   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:17.816422   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:17.905658   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 15:10:17.905732   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:10:17.922453   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 15:10:17.922514   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0728 15:10:17.938599   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 15:10:17.938669   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
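	The ca.pem, server.pem and server-key.pem just copied to /etc/docker back the --tlsverify flags in the dockerd unit written below, so the daemon listening on tcp://0.0.0.0:2376 only accepts clients whose certs chain to this CA. A sketch for checking which SANs the generated server cert carries (standard openssl invocation; the path is this run's machine cert, relative to the MINIKUBE_HOME above):
	    # Sketch: list the SANs baked into the generated docker server cert
	    openssl x509 -noout -text -in .minikube/machines/server.pem | grep -A1 'Subject Alternative Name'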
	I0728 15:10:17.954750   20887 provision.go:86] duration metric: configureAuth took 407.997233ms
	I0728 15:10:17.954762   20887 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:10:17.954913   20887 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:10:17.954963   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.017232   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:10:18.017390   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0728 15:10:18.017401   20887 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:10:18.137197   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:10:18.137228   20887 ubuntu.go:71] root file system type: overlay
	I0728 15:10:18.137404   20887 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:10:18.137486   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.201417   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:10:18.201580   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0728 15:10:18.201627   20887 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:10:18.331185   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:10:18.331301   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.393810   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:10:18.394025   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0728 15:10:18.394044   20887 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:10:18.519288   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:10:18.519309   20887 machine.go:91] provisioned docker machine in 1.353576087s
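	Provisioning ends with a guarded swap: the rendered unit is staged as docker.service.new, and the mv/daemon-reload/restart branch only fires when diff reports a difference, so an unchanged unit never triggers a Docker restart. The same pattern, generalized as a sketch (swap_if_changed is a hypothetical helper name, not minikube code):
	    # Sketch: install a rendered unit file and restart only if its content changed
	    swap_if_changed() {
	      local new=$1 cur=$2
	      if ! sudo diff -u "$cur" "$new"; then   # diff exits non-zero when files differ
	        sudo mv "$new" "$cur"
	        sudo systemctl daemon-reload && sudo systemctl restart docker
	      fi
	    }
	    swap_if_changed /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service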
	I0728 15:10:18.519318   20887 start.go:307] post-start starting for "multinode-20220728150610-12923" (driver="docker")
	I0728 15:10:18.519323   20887 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:10:18.519394   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:10:18.519440   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.583344   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:18.671962   20887 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:10:18.675232   20887 command_runner.go:130] > NAME="Ubuntu"
	I0728 15:10:18.675241   20887 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0728 15:10:18.675244   20887 command_runner.go:130] > ID=ubuntu
	I0728 15:10:18.675250   20887 command_runner.go:130] > ID_LIKE=debian
	I0728 15:10:18.675255   20887 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0728 15:10:18.675258   20887 command_runner.go:130] > VERSION_ID="20.04"
	I0728 15:10:18.675262   20887 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0728 15:10:18.675267   20887 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0728 15:10:18.675271   20887 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0728 15:10:18.675278   20887 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0728 15:10:18.675282   20887 command_runner.go:130] > VERSION_CODENAME=focal
	I0728 15:10:18.675285   20887 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0728 15:10:18.675436   20887 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:10:18.675453   20887 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:10:18.675460   20887 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:10:18.675464   20887 info.go:137] Remote host: Ubuntu 20.04.4 LTS
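	The three "Couldn't set key ..., no corresponding struct field found" warnings above are benign: libmachine unmarshals /etc/os-release into a fixed struct and skips any key it has no field for. Since the file is plain KEY=value shell syntax, the same data can be read directly:
	    # Sketch: /etc/os-release is sourceable shell; read the fields libmachine actually uses
	    . /etc/os-release && echo "$NAME $VERSION_ID ($PRETTY_NAME)"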
	I0728 15:10:18.675475   20887 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:10:18.675583   20887 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:10:18.675715   20887 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:10:18.675721   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> /etc/ssl/certs/129232.pem
	I0728 15:10:18.675863   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:10:18.683199   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:10:18.700539   20887 start.go:310] post-start completed in 181.206774ms
	I0728 15:10:18.700610   20887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:10:18.700669   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.764313   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:18.848660   20887 command_runner.go:130] > 12%
	I0728 15:10:18.848725   20887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:10:18.852620   20887 command_runner.go:130] > 49G
	I0728 15:10:18.852940   20887 fix.go:57] fixHost completed within 2.276240782s
	I0728 15:10:18.852949   20887 start.go:82] releasing machines lock for "multinode-20220728150610-12923", held for 2.276272026s
	I0728 15:10:18.853022   20887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923
	I0728 15:10:18.916111   20887 ssh_runner.go:195] Run: systemctl --version
	I0728 15:10:18.916116   20887 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:10:18.916188   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.916171   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.981606   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:18.981926   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:19.272350   20887 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0728 15:10:19.272368   20887 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0728 15:10:19.272376   20887 command_runner.go:130] > <H1>302 Moved</H1>
	I0728 15:10:19.272383   20887 command_runner.go:130] > The document has moved
	I0728 15:10:19.272401   20887 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0728 15:10:19.272409   20887 command_runner.go:130] > </BODY></HTML>
	I0728 15:10:19.273845   20887 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.17)
	I0728 15:10:19.273860   20887 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0728 15:10:19.273991   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 15:10:19.281044   20887 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0728 15:10:19.293233   20887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:10:19.360088   20887 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0728 15:10:19.446098   20887 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:10:19.455279   20887 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0728 15:10:19.455594   20887 command_runner.go:130] > [Unit]
	I0728 15:10:19.455603   20887 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 15:10:19.455608   20887 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 15:10:19.455611   20887 command_runner.go:130] > BindsTo=containerd.service
	I0728 15:10:19.455615   20887 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0728 15:10:19.455619   20887 command_runner.go:130] > Wants=network-online.target
	I0728 15:10:19.455624   20887 command_runner.go:130] > Requires=docker.socket
	I0728 15:10:19.455631   20887 command_runner.go:130] > StartLimitBurst=3
	I0728 15:10:19.455636   20887 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 15:10:19.455642   20887 command_runner.go:130] > [Service]
	I0728 15:10:19.455647   20887 command_runner.go:130] > Type=notify
	I0728 15:10:19.455652   20887 command_runner.go:130] > Restart=on-failure
	I0728 15:10:19.455659   20887 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 15:10:19.455671   20887 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 15:10:19.455677   20887 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 15:10:19.455682   20887 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 15:10:19.455689   20887 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 15:10:19.455694   20887 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 15:10:19.455714   20887 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 15:10:19.455727   20887 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 15:10:19.455734   20887 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 15:10:19.455737   20887 command_runner.go:130] > ExecStart=
	I0728 15:10:19.455750   20887 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0728 15:10:19.455754   20887 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 15:10:19.455760   20887 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 15:10:19.455765   20887 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 15:10:19.455769   20887 command_runner.go:130] > LimitNOFILE=infinity
	I0728 15:10:19.455772   20887 command_runner.go:130] > LimitNPROC=infinity
	I0728 15:10:19.455775   20887 command_runner.go:130] > LimitCORE=infinity
	I0728 15:10:19.455780   20887 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 15:10:19.455784   20887 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 15:10:19.455788   20887 command_runner.go:130] > TasksMax=infinity
	I0728 15:10:19.455804   20887 command_runner.go:130] > TimeoutStartSec=0
	I0728 15:10:19.455812   20887 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 15:10:19.455817   20887 command_runner.go:130] > Delegate=yes
	I0728 15:10:19.455833   20887 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 15:10:19.455842   20887 command_runner.go:130] > KillMode=process
	I0728 15:10:19.455847   20887 command_runner.go:130] > [Install]
	I0728 15:10:19.455851   20887 command_runner.go:130] > WantedBy=multi-user.target
	I0728 15:10:19.456420   20887 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:10:19.456473   20887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:10:19.466119   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:10:19.477913   20887 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 15:10:19.477925   20887 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
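	The /etc/crictl.yaml written above pins both the runtime and image endpoints to cri-dockerd's socket, which is why the bare `sudo crictl version` a few lines below reaches Docker without extra flags. The endpoint can also be passed explicitly, which is handy when the config file is absent (this uses crictl's standard global flag):
	    # Sketch: query the CRI endpoint directly, bypassing /etc/crictl.yaml
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version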
	I0728 15:10:19.478666   20887 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:10:19.544989   20887 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:10:19.614701   20887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:10:19.680016   20887 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:10:19.910393   20887 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 15:10:19.979484   20887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:10:20.046781   20887 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 15:10:20.056062   20887 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 15:10:20.056123   20887 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 15:10:20.059866   20887 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0728 15:10:20.059877   20887 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0728 15:10:20.059883   20887 command_runner.go:130] > Device: 96h/150d	Inode: 113         Links: 1
	I0728 15:10:20.059892   20887 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0728 15:10:20.059899   20887 command_runner.go:130] > Access: 2022-07-28 22:10:19.369180917 +0000
	I0728 15:10:20.059906   20887 command_runner.go:130] > Modify: 2022-07-28 22:10:19.369180917 +0000
	I0728 15:10:20.059911   20887 command_runner.go:130] > Change: 2022-07-28 22:10:19.377180917 +0000
	I0728 15:10:20.059914   20887 command_runner.go:130] >  Birth: -
	I0728 15:10:20.060070   20887 start.go:471] Will wait 60s for crictl version
	I0728 15:10:20.060109   20887 ssh_runner.go:195] Run: sudo crictl version
	I0728 15:10:20.085604   20887 command_runner.go:130] > Version:  0.1.0
	I0728 15:10:20.085615   20887 command_runner.go:130] > RuntimeName:  docker
	I0728 15:10:20.085619   20887 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0728 15:10:20.085624   20887 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0728 15:10:20.087675   20887 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0728 15:10:20.087754   20887 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:10:20.118335   20887 command_runner.go:130] > 20.10.17
	I0728 15:10:20.121465   20887 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:10:20.153527   20887 command_runner.go:130] > 20.10.17
	I0728 15:10:20.198946   20887 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 15:10:20.199157   20887 cli_runner.go:164] Run: docker exec -t multinode-20220728150610-12923 dig +short host.docker.internal
	I0728 15:10:20.319120   20887 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:10:20.319232   20887 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:10:20.323265   20887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
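	Note the shape of the hosts update above: the filtered file plus the new entry is written to /tmp/h.$$ first and then copied into place with sudo, because a plain `sudo echo ... > /etc/hosts` would open the redirection as the unprivileged user. A generalized sketch of the same append-or-replace step (IP and NAME values here are illustrative):
	    # Sketch: pin a hosts entry idempotently via a temp file, then copy into place as root
	    IP=192.168.65.2; NAME=host.minikube.internal
	    { grep -v "[[:space:]]$NAME\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"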
	I0728 15:10:20.332421   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:20.395207   20887 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:10:20.395280   20887 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:10:20.422104   20887 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.24.3
	I0728 15:10:20.422121   20887 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.24.3
	I0728 15:10:20.422127   20887 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.24.3
	I0728 15:10:20.422141   20887 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.24.3
	I0728 15:10:20.422146   20887 command_runner.go:130] > kindest/kindnetd:v20220510-4929dd75
	I0728 15:10:20.422151   20887 command_runner.go:130] > k8s.gcr.io/etcd:3.5.3-0
	I0728 15:10:20.422155   20887 command_runner.go:130] > k8s.gcr.io/pause:3.7
	I0728 15:10:20.422159   20887 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0728 15:10:20.422163   20887 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0728 15:10:20.422167   20887 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:10:20.422170   20887 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0728 15:10:20.424699   20887 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	kindest/kindnetd:v20220510-4929dd75
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0728 15:10:20.424714   20887 docker.go:542] Images already preloaded, skipping extraction
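	The preload check works by listing the node's images and comparing them against the set expected for Kubernetes v1.24.3; since all eleven are present, tarball extraction is skipped. A sketch for reproducing the same listing from the host (the inner `docker images` runs against the node container's own daemon):
	    # Sketch: list the images already loaded inside the minikube node container
	    docker exec multinode-20220728150610-12923 docker images --format '{{.Repository}}:{{.Tag}}' | sort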
	I0728 15:10:20.424785   20887 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:10:20.451355   20887 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.24.3
	I0728 15:10:20.451367   20887 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.24.3
	I0728 15:10:20.451372   20887 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.24.3
	I0728 15:10:20.451376   20887 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.24.3
	I0728 15:10:20.451385   20887 command_runner.go:130] > kindest/kindnetd:v20220510-4929dd75
	I0728 15:10:20.451390   20887 command_runner.go:130] > k8s.gcr.io/etcd:3.5.3-0
	I0728 15:10:20.451394   20887 command_runner.go:130] > k8s.gcr.io/pause:3.7
	I0728 15:10:20.451402   20887 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0728 15:10:20.451408   20887 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0728 15:10:20.451417   20887 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:10:20.451428   20887 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0728 15:10:20.454268   20887 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	kindest/kindnetd:v20220510-4929dd75
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0728 15:10:20.454291   20887 cache_images.go:84] Images are preloaded, skipping loading
	I0728 15:10:20.454368   20887 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:10:20.522620   20887 command_runner.go:130] > systemd
	I0728 15:10:20.525691   20887 cni.go:95] Creating CNI manager for ""
	I0728 15:10:20.525702   20887 cni.go:156] 3 nodes found, recommending kindnet
	I0728 15:10:20.525717   20887 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:10:20.525735   20887 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220728150610-12923 NodeName:multinode-20220728150610-12923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:10:20.525865   20887 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220728150610-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 15:10:20.525956   20887 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220728150610-12923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728150610-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
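	The kubeadm config above is rendered from the options dump at 15:10:20.525735 and staged as /var/tmp/minikube/kubeadm.yaml.new before kubeadm runs. To see which of these values deviate from kubeadm's own baseline, the bundled binary can print the defaults for the same API versions (a sketch; the --component-configs flag is standard kubeadm CLI):
	    # Sketch: print kubeadm's defaults for comparison with the generated config
	    sudo /var/lib/minikube/binaries/v1.24.3/kubeadm config print init-defaults \
	      --component-configs KubeletConfiguration,KubeProxyConfiguration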
	I0728 15:10:20.526025   20887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 15:10:20.532903   20887 command_runner.go:130] > kubeadm
	I0728 15:10:20.532912   20887 command_runner.go:130] > kubectl
	I0728 15:10:20.532920   20887 command_runner.go:130] > kubelet
	I0728 15:10:20.533757   20887 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:10:20.533802   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:10:20.540651   20887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (492 bytes)
	I0728 15:10:20.552624   20887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:10:20.564863   20887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
	I0728 15:10:20.577323   20887 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:10:20.581045   20887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:10:20.590597   20887 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923 for IP: 192.168.58.2
	I0728 15:10:20.590705   20887 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:10:20.590756   20887 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:10:20.590840   20887 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.key
	I0728 15:10:20.590898   20887 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/apiserver.key.cee25041
	I0728 15:10:20.590943   20887 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/proxy-client.key
	I0728 15:10:20.590949   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0728 15:10:20.591003   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0728 15:10:20.591031   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0728 15:10:20.591055   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0728 15:10:20.591073   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0728 15:10:20.591088   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0728 15:10:20.591104   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0728 15:10:20.591119   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0728 15:10:20.591230   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:10:20.591271   20887 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:10:20.591283   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:10:20.591317   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:10:20.591351   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:10:20.591381   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:10:20.591446   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:10:20.591480   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem -> /usr/share/ca-certificates/12923.pem
	I0728 15:10:20.591498   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> /usr/share/ca-certificates/129232.pem
	I0728 15:10:20.591515   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:10:20.592036   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:10:20.608345   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0728 15:10:20.624811   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:10:20.641977   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 15:10:20.658508   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:10:20.674687   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:10:20.691379   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:10:20.707949   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:10:20.724111   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:10:20.740179   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:10:20.756547   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:10:20.772557   20887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
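The scp sequence above is minikube pushing profile certificates and CA material into the node at fixed in-node target paths. A minimal Go sketch of the same local-to-remote copy, assuming plain scp against the container's forwarded SSH port (the host, port, and the trimmed source paths here are illustrative placeholders; minikube itself does this through its internal ssh_runner, not the scp binary):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Hypothetical source dir and SSH endpoint; the real run uses the
        // .minikube profile dir and the docker-forwarded port for 22/tcp.
        src := "/Users/jenkins/.minikube"
        dest := "docker@127.0.0.1"
        port := "56603" // assumption: forwarded SSH port

        // Same local -> in-node mapping the log shows (subset).
        copies := map[string]string{
            src + "/profiles/multinode/apiserver.crt": "/var/lib/minikube/certs/apiserver.crt",
            src + "/ca.crt":                           "/var/lib/minikube/certs/ca.crt",
            src + "/certs/12923.pem":                  "/usr/share/ca-certificates/12923.pem",
        }
        for local, remote := range copies {
            cmd := exec.Command("scp", "-P", port, local, fmt.Sprintf("%s:%s", dest, remote))
            if out, err := cmd.CombinedOutput(); err != nil {
                log.Fatalf("scp %s: %v\n%s", local, err, out)
            }
        }
    }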
	I0728 15:10:20.784846   20887 ssh_runner.go:195] Run: openssl version
	I0728 15:10:20.789698   20887 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0728 15:10:20.790002   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:10:20.797570   20887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:10:20.801274   20887 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:10:20.801291   20887 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:10:20.801327   20887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:10:20.805951   20887 command_runner.go:130] > 51391683
	I0728 15:10:20.806188   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 15:10:20.813288   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:10:20.844826   20887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:10:20.848327   20887 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:10:20.848495   20887 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:10:20.848546   20887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:10:20.853229   20887 command_runner.go:130] > 3ec20f2e
	I0728 15:10:20.853528   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 15:10:20.860746   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:10:20.868316   20887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:10:20.871931   20887 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:10:20.872084   20887 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:10:20.872141   20887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:10:20.876832   20887 command_runner.go:130] > b5213941
	I0728 15:10:20.877039   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
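The openssl/ln round-trips above install each CA into the OpenSSL hash directory: compute the certificate's subject hash, then symlink /etc/ssl/certs/<hash>.0 back to the file so TLS libraries can look it up by hash. A sketch of one pass in Go, shelling out to the same commands the log runs (meant to run as root inside the node; error handling simplified):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // linkCA installs pem into the OpenSSL hash directory: it computes the
    // subject hash with `openssl x509 -hash -noout` and symlinks
    // /etc/ssl/certs/<hash>.0 back to the certificate.
    func linkCA(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("hash %s: %w", pem, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. 51391683
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // -f replaces a stale link; mirrors the `test -L || ln -fs` in the log.
        return exec.Command("ln", "-fs", pem, link).Run()
    }

    func main() {
        for _, pem := range []string{
            "/usr/share/ca-certificates/12923.pem",
            "/usr/share/ca-certificates/129232.pem",
            "/usr/share/ca-certificates/minikubeCA.pem",
        } {
            if err := linkCA(pem); err != nil {
                log.Fatal(err)
            }
        }
    }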
	I0728 15:10:20.883882   20887 kubeadm.go:395] StartCluster: {Name:multinode-20220728150610-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728150610-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:10:20.883989   20887 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:10:20.912948   20887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:10:20.919832   20887 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0728 15:10:20.919844   20887 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0728 15:10:20.919849   20887 command_runner.go:130] > /var/lib/minikube/etcd:
	I0728 15:10:20.919867   20887 command_runner.go:130] > member
	I0728 15:10:20.920435   20887 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 15:10:20.920449   20887 kubeadm.go:626] restartCluster start
	I0728 15:10:20.920496   20887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 15:10:20.927238   20887 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:20.927291   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:20.990032   20887 kubeconfig.go:116] verify returned: extract IP: "multinode-20220728150610-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:10:20.990114   20887 kubeconfig.go:127] "multinode-20220728150610-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 15:10:20.990343   20887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:10:20.990859   20887 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:10:20.991069   20887 kapi.go:59] client config for multinode-20220728150610-12923: &rest.Config{Host:"https://127.0.0.1:56607", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 15:10:20.991384   20887 cert_rotation.go:137] Starting client certificate rotation controller
	I0728 15:10:20.991528   20887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 15:10:20.999043   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:20.999102   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:21.006905   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:21.207372   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:21.207500   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:21.217954   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:21.407555   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:21.407671   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:21.419369   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:21.608154   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:21.608389   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:21.618943   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:21.809037   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:21.809252   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:21.819597   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:22.007026   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:22.007216   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:22.017034   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:22.209053   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:22.209191   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:22.219642   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:22.407458   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:22.407668   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:22.418267   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:22.609156   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:22.609335   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:22.619763   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:22.808073   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:22.808167   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:22.818864   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:23.009059   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:23.009211   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:23.019328   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:23.207047   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:23.207253   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:23.217269   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:23.407360   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:23.407458   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:23.417405   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:23.607117   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:23.607247   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:23.616680   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:23.809085   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:23.809251   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:23.819674   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:24.009110   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:24.009248   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:24.019592   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:24.019601   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:24.019654   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:24.027532   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
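The repeated "Checking apiserver status" / "stopped" pairs above are a fixed-interval poll: pgrep is re-run roughly every 200ms until a deadline passes, at which point minikube concludes the apiserver is down and needs reconfiguring. A sketch of that loop, with the interval and timeout inferred from the timestamps rather than taken from minikube's source:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServerPID polls pgrep until the kube-apiserver process shows up
    // or the timeout elapses. pgrep exits 1 when nothing matches, which is the
    // "stopped: unable to get apiserver pid" case in the log.
    func waitForAPIServerPID(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            time.Sleep(200 * time.Millisecond) // assumption: matches the ~200ms cadence above
        }
        return "", fmt.Errorf("timed out waiting for kube-apiserver pid")
    }

    func main() {
        pid, err := waitForAPIServerPID(3 * time.Second)
        if err != nil {
            fmt.Println("needs reconfigure:", err)
            return
        }
        fmt.Println("apiserver pid:", pid)
    }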
	I0728 15:10:24.027545   20887 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0728 15:10:24.027551   20887 kubeadm.go:1092] stopping kube-system containers ...
	I0728 15:10:24.027610   20887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:10:24.054242   20887 command_runner.go:130] > 89466f3f8306
	I0728 15:10:24.054253   20887 command_runner.go:130] > 765d6b79e654
	I0728 15:10:24.054257   20887 command_runner.go:130] > ece8e7f7eb66
	I0728 15:10:24.054260   20887 command_runner.go:130] > 50a595b77903
	I0728 15:10:24.054263   20887 command_runner.go:130] > 7b9caab60a97
	I0728 15:10:24.054266   20887 command_runner.go:130] > 0d0894f41f2a
	I0728 15:10:24.054269   20887 command_runner.go:130] > 848acc25a7d7
	I0728 15:10:24.054274   20887 command_runner.go:130] > 4a96e7ffb1b4
	I0728 15:10:24.054279   20887 command_runner.go:130] > 8e2030fdbc79
	I0728 15:10:24.054290   20887 command_runner.go:130] > abab41f9a904
	I0728 15:10:24.054294   20887 command_runner.go:130] > 9db2ba48d7a6
	I0728 15:10:24.054297   20887 command_runner.go:130] > 06994bc702bb
	I0728 15:10:24.054300   20887 command_runner.go:130] > 3641ce6d4a53
	I0728 15:10:24.054304   20887 command_runner.go:130] > bb142f1efac9
	I0728 15:10:24.054306   20887 command_runner.go:130] > 21e11a020b83
	I0728 15:10:24.054311   20887 command_runner.go:130] > e71a37402f1e
	I0728 15:10:24.057234   20887 docker.go:443] Stopping containers: [89466f3f8306 765d6b79e654 ece8e7f7eb66 50a595b77903 7b9caab60a97 0d0894f41f2a 848acc25a7d7 4a96e7ffb1b4 8e2030fdbc79 abab41f9a904 9db2ba48d7a6 06994bc702bb 3641ce6d4a53 bb142f1efac9 21e11a020b83 e71a37402f1e]
	I0728 15:10:24.057306   20887 ssh_runner.go:195] Run: docker stop 89466f3f8306 765d6b79e654 ece8e7f7eb66 50a595b77903 7b9caab60a97 0d0894f41f2a 848acc25a7d7 4a96e7ffb1b4 8e2030fdbc79 abab41f9a904 9db2ba48d7a6 06994bc702bb 3641ce6d4a53 bb142f1efac9 21e11a020b83 e71a37402f1e
	I0728 15:10:24.087008   20887 command_runner.go:130] > 89466f3f8306
	I0728 15:10:24.087022   20887 command_runner.go:130] > 765d6b79e654
	I0728 15:10:24.087026   20887 command_runner.go:130] > ece8e7f7eb66
	I0728 15:10:24.087029   20887 command_runner.go:130] > 50a595b77903
	I0728 15:10:24.087032   20887 command_runner.go:130] > 7b9caab60a97
	I0728 15:10:24.087038   20887 command_runner.go:130] > 0d0894f41f2a
	I0728 15:10:24.087042   20887 command_runner.go:130] > 848acc25a7d7
	I0728 15:10:24.087046   20887 command_runner.go:130] > 4a96e7ffb1b4
	I0728 15:10:24.087049   20887 command_runner.go:130] > 8e2030fdbc79
	I0728 15:10:24.087052   20887 command_runner.go:130] > abab41f9a904
	I0728 15:10:24.087056   20887 command_runner.go:130] > 9db2ba48d7a6
	I0728 15:10:24.087059   20887 command_runner.go:130] > 06994bc702bb
	I0728 15:10:24.087063   20887 command_runner.go:130] > 3641ce6d4a53
	I0728 15:10:24.087066   20887 command_runner.go:130] > bb142f1efac9
	I0728 15:10:24.087069   20887 command_runner.go:130] > 21e11a020b83
	I0728 15:10:24.087073   20887 command_runner.go:130] > e71a37402f1e
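Stopping the kube-system containers is a two-step docker exchange: list every container (running or not) whose kubelet-assigned name matches k8s_.*_(kube-system)_, collect the IDs, then issue a single docker stop with all of them. A sketch using the same filter and format strings as the log:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // List every container whose kubelet-assigned name places it in the
        // kube-system namespace, printing only the IDs.
        ps := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}")
        out, err := ps.Output()
        if err != nil {
            log.Fatal(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return
        }
        fmt.Println("Stopping containers:", ids)
        // One `docker stop` with all IDs, as in the log above.
        stop := exec.Command("docker", append([]string{"stop"}, ids...)...)
        if err := stop.Run(); err != nil {
            log.Fatal(err)
        }
    }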
	I0728 15:10:24.087126   20887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 15:10:24.097123   20887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:10:24.103794   20887 command_runner.go:130] > -rw------- 1 root root 5643 Jul 28 22:06 /etc/kubernetes/admin.conf
	I0728 15:10:24.103819   20887 command_runner.go:130] > -rw------- 1 root root 5656 Jul 28 22:06 /etc/kubernetes/controller-manager.conf
	I0728 15:10:24.103832   20887 command_runner.go:130] > -rw------- 1 root root 2059 Jul 28 22:06 /etc/kubernetes/kubelet.conf
	I0728 15:10:24.103849   20887 command_runner.go:130] > -rw------- 1 root root 5600 Jul 28 22:06 /etc/kubernetes/scheduler.conf
	I0728 15:10:24.104519   20887 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 28 22:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 28 22:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Jul 28 22:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 28 22:06 /etc/kubernetes/scheduler.conf
	
	I0728 15:10:24.104568   20887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 15:10:24.111292   20887 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0728 15:10:24.111910   20887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 15:10:24.118662   20887 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0728 15:10:24.119489   20887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 15:10:24.126384   20887 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:24.126429   20887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 15:10:24.133425   20887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 15:10:24.140649   20887 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:24.140700   20887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0728 15:10:24.147327   20887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:10:24.154358   20887 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 15:10:24.154370   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:10:24.193832   20887 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0728 15:10:24.193844   20887 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0728 15:10:24.194373   20887 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0728 15:10:24.194669   20887 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0728 15:10:24.195001   20887 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0728 15:10:24.195324   20887 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0728 15:10:24.195475   20887 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0728 15:10:24.195896   20887 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0728 15:10:24.196378   20887 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0728 15:10:24.196727   20887 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0728 15:10:24.197009   20887 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0728 15:10:24.197373   20887 command_runner.go:130] > [certs] Using the existing "sa" key
	I0728 15:10:24.200570   20887 command_runner.go:130] ! W0728 22:10:24.193675    1084 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:10:24.200588   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:10:24.239367   20887 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0728 15:10:24.372852   20887 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0728 15:10:24.637750   20887 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0728 15:10:24.924617   20887 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0728 15:10:25.188133   20887 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0728 15:10:25.192024   20887 command_runner.go:130] ! W0728 22:10:24.238810    1094 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:10:25.192054   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:10:25.287921   20887 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 15:10:25.288969   20887 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 15:10:25.288977   20887 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0728 15:10:25.362289   20887 command_runner.go:130] ! W0728 22:10:25.231047    1117 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:10:25.362321   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:10:25.398362   20887 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0728 15:10:25.398375   20887 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0728 15:10:25.402342   20887 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0728 15:10:25.403201   20887 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0728 15:10:25.406925   20887 command_runner.go:130] ! W0728 22:10:25.398196    1161 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:10:25.406953   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:10:25.445694   20887 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0728 15:10:25.456452   20887 command_runner.go:130] ! W0728 22:10:25.446485    1174 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
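Rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml, so existing certificates and etcd data are reused, as the "Using existing ..." lines above confirm. A sketch of that sequencing with the binary and config paths from the log (error handling simplified):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.24.3/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"

        // Phase order used by the restart path in the log: certs first, then
        // kubeconfigs, then the kubelet, then static pod manifests, then etcd.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", cfg)
            cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                log.Fatalf("kubeadm %v: %v", p, err)
            }
        }
    }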
	I0728 15:10:25.456481   20887 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:10:25.456537   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:10:25.966687   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:10:26.466663   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:10:26.485240   20887 command_runner.go:130] > 1658
	I0728 15:10:26.485265   20887 api_server.go:71] duration metric: took 1.028783006s to wait for apiserver process to appear ...
	I0728 15:10:26.485285   20887 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:10:26.485308   20887 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0728 15:10:26.487124   20887 api_server.go:256] stopped: https://127.0.0.1:56607/healthz: Get "https://127.0.0.1:56607/healthz": EOF
	I0728 15:10:26.988185   20887 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0728 15:10:29.359701   20887 api_server.go:266] https://127.0.0.1:56607/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 15:10:29.359716   20887 api_server.go:102] status: https://127.0.0.1:56607/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 15:10:29.487341   20887 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0728 15:10:29.494429   20887 api_server.go:266] https://127.0.0.1:56607/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:10:29.494447   20887 api_server.go:102] status: https://127.0.0.1:56607/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:10:29.987374   20887 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0728 15:10:29.995683   20887 api_server.go:266] https://127.0.0.1:56607/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:10:29.995701   20887 api_server.go:102] status: https://127.0.0.1:56607/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:10:30.487290   20887 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0728 15:10:30.493116   20887 api_server.go:266] https://127.0.0.1:56607/healthz returned 200:
	ok
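The healthz probe above tolerates the transient 403 (anonymous user before the RBAC bootstrap roles land) and 500 (poststarthooks still failing) by simply re-polling every ~500ms until /healthz returns 200 with body "ok". A sketch of that loop; certificate verification is skipped here only because this standalone version has no cluster CA or client certs wired in, unlike minikube's real client:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Assumption: bare HTTPS client with verification disabled, since this
        // sketch carries no cluster CA bundle.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        url := "https://127.0.0.1:56607/healthz"
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
                // 403/500 during startup are expected; fall through and retry.
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
        }
        fmt.Println("timed out waiting for healthz")
    }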
	I0728 15:10:30.493178   20887 round_trippers.go:463] GET https://127.0.0.1:56607/version
	I0728 15:10:30.493186   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:30.493194   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:30.493200   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:30.499482   20887 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0728 15:10:30.499493   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:30.499498   20887 round_trippers.go:580]     Audit-Id: 6fbd46fc-5a70-4aa3-a4c2-8eed5b981815
	I0728 15:10:30.499504   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:30.499508   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:30.499513   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:30.499518   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:30.499522   20887 round_trippers.go:580]     Content-Length: 263
	I0728 15:10:30.499527   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:30 GMT
	I0728 15:10:30.499545   20887 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "24",
	  "gitVersion": "v1.24.3",
	  "gitCommit": "aef86a93758dc3cb2c658dd9657ab4ad4afc21cb",
	  "gitTreeState": "clean",
	  "buildDate": "2022-07-13T14:23:26Z",
	  "goVersion": "go1.18.3",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0728 15:10:30.499591   20887 api_server.go:140] control plane version: v1.24.3
	I0728 15:10:30.499602   20887 api_server.go:130] duration metric: took 4.014306737s to wait for apiserver health ...
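The /version body above is the standard Kubernetes version payload, and the "control plane version" line is just its gitVersion field. Decoding it needs only a small struct; the field subset below is chosen for this sketch:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // versionInfo mirrors the /version response fields this sketch cares about.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
    }

    func main() {
        // Body trimmed from the response above.
        body := []byte(`{"major":"1","minor":"24","gitVersion":"v1.24.3"}`)
        var v versionInfo
        if err := json.Unmarshal(body, &v); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("control plane version: %s (%s.%s)\n", v.GitVersion, v.Major, v.Minor)
    }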
	I0728 15:10:30.499607   20887 cni.go:95] Creating CNI manager for ""
	I0728 15:10:30.499611   20887 cni.go:156] 3 nodes found, recommending kindnet
	I0728 15:10:30.520847   20887 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0728 15:10:30.542298   20887 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0728 15:10:30.547667   20887 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0728 15:10:30.547678   20887 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0728 15:10:30.547687   20887 command_runner.go:130] > Device: 8eh/142d	Inode: 267113      Links: 1
	I0728 15:10:30.547696   20887 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0728 15:10:30.547702   20887 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0728 15:10:30.547707   20887 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0728 15:10:30.547711   20887 command_runner.go:130] > Change: 2022-07-28 21:39:58.672799402 +0000
	I0728 15:10:30.547715   20887 command_runner.go:130] >  Birth: -
	I0728 15:10:30.547754   20887 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.3/kubectl ...
	I0728 15:10:30.547761   20887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0728 15:10:30.561034   20887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0728 15:10:31.276669   20887 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0728 15:10:31.279451   20887 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0728 15:10:31.283993   20887 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0728 15:10:31.295794   20887 command_runner.go:130] > daemonset.apps/kindnet configured
	I0728 15:10:31.352052   20887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:10:31.352130   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:10:31.352137   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.352146   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.352154   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.356002   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:31.356021   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.356029   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.356039   20887 round_trippers.go:580]     Audit-Id: 00f61dd6-bf01-4638-b451-aeeec006f1d1
	I0728 15:10:31.356045   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.356052   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.356062   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.356072   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.357186   20887 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"691"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"411","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".": [truncated 83373 chars]
	I0728 15:10:31.360127   20887 system_pods.go:59] 12 kube-system pods found
	I0728 15:10:31.360143   20887 system_pods.go:61] "coredns-6d4b75cb6d-dfxk7" [ea8a6018-c281-45ec-bbb7-19f2988aa884] Running
	I0728 15:10:31.360147   20887 system_pods.go:61] "etcd-multinode-20220728150610-12923" [2d4683e7-2e93-41c5-af51-5181a7c29edd] Running
	I0728 15:10:31.360151   20887 system_pods.go:61] "kindnet-52mvf" [ef5b2400-09e0-4d0c-98b9-d520fd42e827] Running
	I0728 15:10:31.360154   20887 system_pods.go:61] "kindnet-tlp2m" [b535556f-fbbe-4220-9037-5016f1b8fb51] Running
	I0728 15:10:31.360157   20887 system_pods.go:61] "kindnet-v5hq8" [3410f1c1-9947-4a08-8503-660caf65dc5c] Running
	I0728 15:10:31.360162   20887 system_pods.go:61] "kube-apiserver-multinode-20220728150610-12923" [34425f5f-5cbc-4e7c-89b3-e4758c44f162] Running
	I0728 15:10:31.360165   20887 system_pods.go:61] "kube-controller-manager-multinode-20220728150610-12923" [92841ab9-f773-435e-a133-794e0d8e0cef] Running
	I0728 15:10:31.360168   20887 system_pods.go:61] "kube-proxy-bxdk6" [befca8fa-aef6-415a-b033-8522067db320] Running
	I0728 15:10:31.360171   20887 system_pods.go:61] "kube-proxy-cdz7z" [f9727653-ed51-43f3-95ad-fd2f5fb0ac6e] Running
	I0728 15:10:31.360174   20887 system_pods.go:61] "kube-proxy-cn9x2" [813dc8a0-2ea3-4ee9-83ce-fe09ccf38295] Running
	I0728 15:10:31.360179   20887 system_pods.go:61] "kube-scheduler-multinode-20220728150610-12923" [ef5d84ce-4249-4af0-b1be-7a3d7f8c2205] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 15:10:31.360186   20887 system_pods.go:61] "storage-provisioner" [29238934-2c0b-4262-80ff-12975d44a715] Running
	I0728 15:10:31.360189   20887 system_pods.go:74] duration metric: took 8.125685ms to wait for pod list to return data ...
	I0728 15:10:31.360196   20887 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:10:31.360228   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes
	I0728 15:10:31.360232   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.360238   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.360243   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.362649   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:31.362662   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.362670   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.362697   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.362724   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.362736   20887 round_trippers.go:580]     Audit-Id: 99e2d4b1-bb2e-4f02-affc-de2fc524e449
	I0728 15:10:31.362743   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.362750   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.363193   20887 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"691"},"items":[{"metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","op [truncated 16208 chars]
	I0728 15:10:31.363825   20887 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:10:31.363837   20887 node_conditions.go:123] node cpu capacity is 6
	I0728 15:10:31.363848   20887 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:10:31.363852   20887 node_conditions.go:123] node cpu capacity is 6
	I0728 15:10:31.363855   20887 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:10:31.363859   20887 node_conditions.go:123] node cpu capacity is 6
	I0728 15:10:31.363862   20887 node_conditions.go:105] duration metric: took 3.663155ms to run NodePressure ...
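The NodePressure check lists all nodes and reads each one's cpu and ephemeral-storage figures out of the NodeList, as the three repeated capacity lines show (one per node in this 3-node cluster). A client-go sketch of the same listing (requires k8s.io/client-go in go.mod; the kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; the run above uses the one under the
        // jenkins minikube-integration dir.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            // Capacity is a map of resource name -> quantity on each node.
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
    }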
	I0728 15:10:31.363874   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:10:31.568994   20887 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0728 15:10:31.669716   20887 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0728 15:10:31.676783   20887 command_runner.go:130] ! W0728 22:10:31.416577    2222 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:10:31.676806   20887 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0728 15:10:31.676865   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0728 15:10:31.676870   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.676877   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.676883   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.680864   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:31.680877   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.680883   20887 round_trippers.go:580]     Audit-Id: a8ee54f4-126d-4750-876c-535e354a839b
	I0728 15:10:31.680887   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.680892   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.680897   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.680901   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.680905   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.681089   20887 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"693"},"items":[{"metadata":{"name":"etcd-multinode-20220728150610-12923","namespace":"kube-system","uid":"2d4683e7-2e93-41c5-af51-5181a7c29edd","resourceVersion":"319","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"86483540902094820b1a0cc29f31f22f","kubernetes.io/config.mirror":"86483540902094820b1a0cc29f31f22f","kubernetes.io/config.seen":"2022-07-28T22:06:37.255020292Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata [truncated 30755 chars]
	I0728 15:10:31.681831   20887 kubeadm.go:777] kubelet initialised
	I0728 15:10:31.681839   20887 kubeadm.go:778] duration metric: took 5.018574ms waiting for restarted kubelet to initialise ...
	I0728 15:10:31.681846   20887 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:10:31.681877   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:10:31.681880   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.681886   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.681892   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.685268   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:31.685284   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.685292   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.685300   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.685319   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.685333   20887 round_trippers.go:580]     Audit-Id: 62e9b6ef-3db6-474b-9b59-cc489e919cc2
	I0728 15:10:31.685342   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.685348   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.687157   20887 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"693"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"411","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".": [truncated 83840 chars]
	I0728 15:10:31.689057   20887 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:31.689106   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:31.689110   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.689116   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.689122   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.692042   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:31.692055   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.692061   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.692066   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.692070   20887 round_trippers.go:580]     Audit-Id: 37604e65-07bb-46b1-9c4a-2ba090b8e744
	I0728 15:10:31.692075   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.692079   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.692085   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.692145   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"411","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 5982 chars]
	I0728 15:10:31.692410   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:31.692418   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.692427   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.692442   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.694782   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:31.694807   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.694816   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.694823   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.694831   20887 round_trippers.go:580]     Audit-Id: 8e58f15b-c0ca-421c-99c9-35d8fa67519d
	I0728 15:10:31.694846   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.694862   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.694871   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.694958   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:31.695169   20887 pod_ready.go:92] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:31.695179   20887 pod_ready.go:81] duration metric: took 6.109081ms waiting for pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace to be "Ready" ...
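(Editor's note: the pod_ready lines above show the shape of the readiness check — fetch the pod, test its Ready condition, then fetch the node. As a hedged illustration only, not minikube's actual pod_ready.go, a client-go program performing the same check for the coredns pod could look like the sketch below; the pod name is copied from this log and the default kubeconfig path is an assumption.)

// Sketch only: reproduce the "is this pod Ready?" check seen in the log.
// Assumes a reachable cluster via the default kubeconfig.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady mirrors the condition implied by pod_ready.go:92:
// a pod counts as "Ready" when its PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Pod name taken from this log; any other run would substitute its own.
	pod, err := client.CoreV1().Pods("kube-system").
		Get(context.Background(), "coredns-6d4b75cb6d-dfxk7", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%t\n", pod.Name, podIsReady(pod))
}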
	I0728 15:10:31.695187   20887 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:31.695222   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/etcd-multinode-20220728150610-12923
	I0728 15:10:31.695229   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.695235   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.695243   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.697682   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:31.697695   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.697701   20887 round_trippers.go:580]     Audit-Id: e0f33fc5-f37d-4352-9d28-292ba24301c3
	I0728 15:10:31.697706   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.697710   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.697716   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.697720   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.697725   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.697783   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220728150610-12923","namespace":"kube-system","uid":"2d4683e7-2e93-41c5-af51-5181a7c29edd","resourceVersion":"319","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"86483540902094820b1a0cc29f31f22f","kubernetes.io/config.mirror":"86483540902094820b1a0cc29f31f22f","kubernetes.io/config.seen":"2022-07-28T22:06:37.255020292Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fi
eldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io [truncated 5843 chars]
	I0728 15:10:31.698038   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:31.698044   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.698050   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.698057   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.701269   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:31.701283   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.701289   20887 round_trippers.go:580]     Audit-Id: 3a2fb442-6199-423b-bc4b-dc2100732ffd
	I0728 15:10:31.701293   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.701298   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.701305   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.701312   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.701319   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.701375   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:31.701591   20887 pod_ready.go:92] pod "etcd-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:31.701600   20887 pod_ready.go:81] duration metric: took 6.4065ms waiting for pod "etcd-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:31.701611   20887 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:31.701645   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220728150610-12923
	I0728 15:10:31.701650   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.701655   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.701661   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.704654   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:31.704665   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.704671   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.704676   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.704683   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.704689   20887 round_trippers.go:580]     Audit-Id: 24cf915d-9029-4a3a-aa2a-75b2690c4ec4
	I0728 15:10:31.704693   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.704699   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.704758   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220728150610-12923","namespace":"kube-system","uid":"34425f5f-5cbc-4e7c-89b3-e4758c44f162","resourceVersion":"281","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"f17cd4a02884221436b424aa5c9008ee","kubernetes.io/config.mirror":"f17cd4a02884221436b424aa5c9008ee","kubernetes.io/config.seen":"2022-07-28T22:06:37.255021189Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z",
"fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".": [truncated 8310 chars]
	I0728 15:10:31.705043   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:31.705050   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.705056   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.705061   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.709251   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:31.709263   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.709269   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.709273   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.709278   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.709282   20887 round_trippers.go:580]     Audit-Id: 5807c750-54cb-454d-a8d7-52cffd414413
	I0728 15:10:31.709287   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.709292   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.709504   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:31.709717   20887 pod_ready.go:92] pod "kube-apiserver-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:31.709726   20887 pod_ready.go:81] duration metric: took 8.109388ms waiting for pod "kube-apiserver-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
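(Editor's note: the interleaved GET / Request Headers / Response Status lines are emitted by client-go's verbose round-tripper debugging, which the test binary enables at high -v levels. Purely as an illustrative sketch — not client-go's implementation — a custom http.RoundTripper producing output of a similar shape could be written as follows; loggingRT and the example URL are hypothetical.)

// loggingRT is a hypothetical wrapper that logs each request's method,
// URL, and headers, then the response status with a millisecond latency,
// echoing the round_trippers.go lines in this report.
package main

import (
	"log"
	"net/http"
	"time"
)

type loggingRT struct{ next http.RoundTripper }

func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	for k, v := range req.Header {
		log.Printf("    %s: %v", k, v)
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds",
		resp.Status, time.Since(start).Milliseconds())
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingRT{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}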
	I0728 15:10:31.709734   20887 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:31.709776   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:31.709784   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.709793   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.709800   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.753692   20887 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0728 15:10:31.753723   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.753741   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.753763   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.753786   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.753798   20887 round_trippers.go:580]     Audit-Id: ab7bdc65-a664-43bf-bfb9-a52c9c3c8a63
	I0728 15:10:31.753817   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.753837   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.754758   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:31.755236   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:31.755246   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.755254   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.755305   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.759738   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:31.759752   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.759758   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.759765   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.759771   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.759776   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.759781   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.759786   20887 round_trippers.go:580]     Audit-Id: c1f9fda3-fa61-4aed-917c-361d01e05d00
	I0728 15:10:31.759840   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:32.261623   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:32.261644   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:32.261656   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:32.261666   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:32.265093   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:32.265118   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:32.265130   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:32 GMT
	I0728 15:10:32.265144   20887 round_trippers.go:580]     Audit-Id: c093c7ec-ac70-463f-b163-2abf7436d48d
	I0728 15:10:32.265151   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:32.265157   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:32.265167   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:32.265174   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:32.265458   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:32.265743   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:32.265749   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:32.265755   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:32.265761   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:32.267930   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:32.267941   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:32.267950   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:32.267955   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:32.267960   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:32.267964   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:32 GMT
	I0728 15:10:32.267969   20887 round_trippers.go:580]     Audit-Id: 75c0f484-c374-47c3-ab0b-555277c52548
	I0728 15:10:32.267973   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:32.268799   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:32.761227   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:32.761254   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:32.761266   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:32.761276   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:32.765662   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:32.765677   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:32.765701   20887 round_trippers.go:580]     Audit-Id: 9cfe05eb-92d4-4131-aaeb-cc5ca2ba84bf
	I0728 15:10:32.765706   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:32.765711   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:32.765716   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:32.765721   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:32.765725   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:32 GMT
	I0728 15:10:32.765796   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:32.766103   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:32.766110   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:32.766115   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:32.766121   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:32.767765   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:32.767780   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:32.767793   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:32.767803   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:32.767810   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:32 GMT
	I0728 15:10:32.767819   20887 round_trippers.go:580]     Audit-Id: a30f8ace-f8a7-4f3c-960e-53eb105eb364
	I0728 15:10:32.767825   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:32.767833   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:32.768097   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:33.261918   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:33.261938   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:33.261950   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:33.261960   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:33.265834   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:33.265849   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:33.265857   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:33.265863   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:33 GMT
	I0728 15:10:33.265871   20887 round_trippers.go:580]     Audit-Id: 620b0f05-7cbe-4d99-9496-e17f80c3f1e2
	I0728 15:10:33.265877   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:33.265884   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:33.265890   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:33.265972   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:33.266299   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:33.266305   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:33.266311   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:33.266316   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:33.268509   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:33.268519   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:33.268526   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:33.268532   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:33 GMT
	I0728 15:10:33.268536   20887 round_trippers.go:580]     Audit-Id: c903f6be-cbe7-4db0-9f1a-92df74e99c36
	I0728 15:10:33.268541   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:33.268546   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:33.268550   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:33.268598   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:33.760686   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:33.760707   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:33.760719   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:33.760729   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:33.764510   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:33.764520   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:33.764526   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:33.764531   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:33.764535   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:33 GMT
	I0728 15:10:33.764549   20887 round_trippers.go:580]     Audit-Id: 22d6312f-3fac-46db-b008-757b79f23127
	I0728 15:10:33.764554   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:33.764561   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:33.764953   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:33.765254   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:33.765260   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:33.765266   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:33.765271   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:33.767129   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:33.767138   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:33.767143   20887 round_trippers.go:580]     Audit-Id: f873cb6f-bd0d-4c8f-ad2a-a8d1f430e215
	I0728 15:10:33.767148   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:33.767153   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:33.767158   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:33.767165   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:33.767175   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:33 GMT
	I0728 15:10:33.767454   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:33.767643   20887 pod_ready.go:102] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"False"
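(Editor's note: from this point the loop visible above simply repeats — kube-controller-manager reports Ready=False, so the same pod and node GETs recur on a roughly 500ms cadence until either the pod turns Ready or the 4m0s deadline from pod_ready.go:78 expires. A hedged sketch of that retry shape, using apimachinery's wait helper rather than minikube's own code; waitPodReady is an illustrative name and podIsReady is the condition check from the earlier sketch.)

// Sketch of the ~500ms poll / 4m0s timeout implied by the log above.
import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 4*time.Minute,
		func() (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as "not yet"; keep polling
			}
			return podIsReady(pod), nil
		})
}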
	I0728 15:10:34.261518   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:34.261540   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:34.261553   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:34.261564   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:34.265390   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:34.265406   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:34.265414   20887 round_trippers.go:580]     Audit-Id: 183568fa-2c80-4fa8-99cb-fae4a4ed516d
	I0728 15:10:34.265420   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:34.265426   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:34.265433   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:34.265439   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:34.265445   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:34 GMT
	I0728 15:10:34.265524   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:34.267000   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:34.267024   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:34.267057   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:34.267197   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:34.269032   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:34.269041   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:34.269046   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:34.269050   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:34.269055   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:34.269060   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:34.269065   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:34 GMT
	I0728 15:10:34.269069   20887 round_trippers.go:580]     Audit-Id: 39cac272-2d8d-4b0b-9afc-cb2d89210379
	I0728 15:10:34.269111   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:34.760554   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:34.760570   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:34.760577   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:34.760582   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:34.763313   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:34.763326   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:34.763332   20887 round_trippers.go:580]     Audit-Id: 5199507f-c112-4185-8a5a-46992e91b5b4
	I0728 15:10:34.763341   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:34.763347   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:34.763351   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:34.763356   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:34.763361   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:34 GMT
	I0728 15:10:34.763430   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:34.763723   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:34.763730   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:34.763736   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:34.763742   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:34.765762   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:34.765771   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:34.765778   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:34.765783   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:34.765788   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:34 GMT
	I0728 15:10:34.765793   20887 round_trippers.go:580]     Audit-Id: e7cbc4e5-7086-406b-b00d-866bf15ed69e
	I0728 15:10:34.765798   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:34.765803   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:34.765861   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:35.262169   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:35.262195   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:35.262207   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:35.262218   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:35.266317   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:35.266332   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:35.266340   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:35.266347   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:35 GMT
	I0728 15:10:35.266354   20887 round_trippers.go:580]     Audit-Id: 51b10cbe-a099-4ae1-a2ac-4f69f131e42a
	I0728 15:10:35.266360   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:35.266366   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:35.266375   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:35.266450   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:35.266727   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:35.266733   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:35.266739   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:35.266744   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:35.268577   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:35.268585   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:35.268590   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:35.268597   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:35.268607   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:35 GMT
	I0728 15:10:35.268623   20887 round_trippers.go:580]     Audit-Id: 6a894c42-dfbf-4a9c-a9aa-4478929778fa
	I0728 15:10:35.268634   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:35.268665   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:35.268908   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:35.760463   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:35.760485   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:35.760497   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:35.760507   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:35.764570   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:35.764587   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:35.764598   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:35.764607   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:35 GMT
	I0728 15:10:35.764615   20887 round_trippers.go:580]     Audit-Id: 677f457b-4fc3-49ba-a446-cf72f491f4f3
	I0728 15:10:35.764625   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:35.764633   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:35.764639   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:35.764712   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:35.764986   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:35.764994   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:35.765000   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:35.765008   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:35.766971   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:35.766980   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:35.766986   20887 round_trippers.go:580]     Audit-Id: 4ad89767-e54c-48d4-8b95-01a5b5b96325
	I0728 15:10:35.766993   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:35.766999   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:35.767005   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:35.767013   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:35.767019   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:35 GMT
	I0728 15:10:35.767144   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:36.260873   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:36.260894   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:36.260906   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:36.260915   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:36.264388   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:36.264407   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:36.264421   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:36.264434   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:36.264453   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:36.264464   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:36.264473   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:36 GMT
	I0728 15:10:36.264480   20887 round_trippers.go:580]     Audit-Id: 5df957a5-9aa6-420e-b575-a0124306c220
	I0728 15:10:36.264656   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:36.265027   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:36.265037   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:36.265045   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:36.265052   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:36.267191   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:36.267202   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:36.267210   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:36.267217   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:36.267222   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:36 GMT
	I0728 15:10:36.267229   20887 round_trippers.go:580]     Audit-Id: 073f3c6b-ebad-41b9-84de-90929d6959fc
	I0728 15:10:36.267234   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:36.267238   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:36.267294   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:36.267484   20887 pod_ready.go:102] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"False"
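The pod_ready.go:102 line marks one iteration of minikube's readiness wait: roughly twice per second (see the timestamps) it re-fetches the kube-controller-manager pod, inspects its Ready condition, and re-fetches the node it is scheduled on, repeating until the pod reports Ready or the wait times out. In the window captured here every iteration comes back "Ready":"False". A minimal sketch of that polling loop with client-go; this is illustrative, not minikube's actual pod_ready.go, and the pod name and namespace are taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a reachable kubeconfig in the default location; minikube
	// builds its REST config from the profile instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	const ns = "kube-system"
	const name = "kube-controller-manager-multinode-20220728150610-12923"
	// Poll every 500ms, matching the cadence visible in the log timestamps.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if !podIsReady(pod) {
			// Mirrors the pod_ready.go:102 status line in the log above.
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
			return false, nil
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up waiting for pod readiness:", err)
	}
}

The paired GET on /api/v1/nodes/... after each pod fetch corresponds to minikube also checking the host node's state each iteration before polling again.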
	I0728 15:10:36.761570   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:36.761590   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:36.761606   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:36.761617   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:36.765861   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:36.765890   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:36.765896   20887 round_trippers.go:580]     Audit-Id: 1efca274-96b6-4337-b334-a56863d2a131
	I0728 15:10:36.765900   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:36.765905   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:36.765909   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:36.765914   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:36.765918   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:36 GMT
	I0728 15:10:36.765997   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:36.766270   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:36.766276   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:36.766284   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:36.766290   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:36.768016   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:36.768026   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:36.768032   20887 round_trippers.go:580]     Audit-Id: 8263e809-97ab-41ab-a5a3-4d71fc4ab76a
	I0728 15:10:36.768037   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:36.768041   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:36.768046   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:36.768051   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:36.768056   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:36 GMT
	I0728 15:10:36.768109   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:37.260211   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:37.260224   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:37.260230   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:37.260235   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:37.262594   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:37.262606   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:37.262611   20887 round_trippers.go:580]     Audit-Id: 107272f5-40d3-4c19-a419-e583886f24e8
	I0728 15:10:37.262616   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:37.262623   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:37.262629   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:37.262634   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:37.262639   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:37 GMT
	I0728 15:10:37.262733   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:37.263085   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:37.263092   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:37.263098   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:37.263104   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:37.265022   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:37.265038   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:37.265044   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:37.265049   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:37.265053   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:37.265058   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:37 GMT
	I0728 15:10:37.265065   20887 round_trippers.go:580]     Audit-Id: 19d85165-057c-436f-ba0d-6854fe197346
	I0728 15:10:37.265071   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:37.265121   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:37.760441   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:37.760463   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:37.760476   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:37.760486   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:37.764964   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:37.764977   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:37.764983   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:37.764988   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:37.764993   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:37 GMT
	I0728 15:10:37.764998   20887 round_trippers.go:580]     Audit-Id: 55bea9dc-69fb-4198-acd2-4a4e4ac5c756
	I0728 15:10:37.765003   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:37.765007   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:37.765090   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:37.765380   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:37.765387   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:37.765393   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:37.765398   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:37.767222   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:37.767231   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:37.767236   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:37.767242   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:37 GMT
	I0728 15:10:37.767246   20887 round_trippers.go:580]     Audit-Id: 1b40b039-85f0-4264-b49f-07fe55f03ace
	I0728 15:10:37.767251   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:37.767255   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:37.767260   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:37.767305   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:38.260565   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:38.260585   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:38.260598   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:38.260608   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:38.264461   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:38.264479   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:38.264489   20887 round_trippers.go:580]     Audit-Id: 1bdcdbba-6073-4f99-921c-813ac4e1757a
	I0728 15:10:38.264499   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:38.264508   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:38.264515   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:38.264521   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:38.264526   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:38 GMT
	I0728 15:10:38.265093   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:38.265394   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:38.265401   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:38.265407   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:38.265413   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:38.267267   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:38.267278   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:38.267285   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:38.267290   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:38.267299   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:38.267304   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:38.267309   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:38 GMT
	I0728 15:10:38.267313   20887 round_trippers.go:580]     Audit-Id: ae5f0c88-3b5c-4587-bd74-4aa339f01e38
	I0728 15:10:38.267620   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:38.267801   20887 pod_ready.go:102] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:10:38.761718   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:38.761738   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:38.761750   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:38.761761   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:38.766233   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:38.766245   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:38.766251   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:38 GMT
	I0728 15:10:38.766255   20887 round_trippers.go:580]     Audit-Id: 89894f36-e222-4441-80f5-c67e2ac4e6d5
	I0728 15:10:38.766260   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:38.766265   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:38.766270   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:38.766276   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:38.766338   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:38.766614   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:38.766621   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:38.766627   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:38.766633   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:38.768463   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:38.768473   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:38.768480   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:38.768484   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:38.768489   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:38.768495   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:38 GMT
	I0728 15:10:38.768499   20887 round_trippers.go:580]     Audit-Id: 1fc846b9-1375-49e9-bf89-27c2439c0dcb
	I0728 15:10:38.768509   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:38.768792   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:39.260347   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:39.260369   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:39.260382   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:39.260392   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:39.264749   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:39.264764   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:39.264772   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:39 GMT
	I0728 15:10:39.264777   20887 round_trippers.go:580]     Audit-Id: 4cc281ca-e737-4951-a75b-0423e9fcc720
	I0728 15:10:39.264781   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:39.264786   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:39.264790   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:39.264794   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:39.264855   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:39.265163   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:39.265170   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:39.265176   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:39.265181   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:39.267056   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:39.267065   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:39.267070   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:39 GMT
	I0728 15:10:39.267075   20887 round_trippers.go:580]     Audit-Id: 9819eb6f-0da9-44f6-b73f-93419d35783d
	I0728 15:10:39.267081   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:39.267085   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:39.267090   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:39.267094   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:39.267143   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:39.760262   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:39.760275   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:39.760281   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:39.760286   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:39.762964   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:39.762973   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:39.762979   20887 round_trippers.go:580]     Audit-Id: 34c971e5-f5e0-4b8a-91c5-0228cf20c5f2
	I0728 15:10:39.762983   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:39.762988   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:39.762993   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:39.762997   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:39.763001   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:39 GMT
	I0728 15:10:39.763055   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:39.763335   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:39.763341   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:39.763347   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:39.763352   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:39.764935   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:39.764944   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:39.764950   20887 round_trippers.go:580]     Audit-Id: 11e4902e-f9fa-4b0c-9d97-b660a6f1fb6e
	I0728 15:10:39.764957   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:39.764965   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:39.764970   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:39.764975   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:39.764980   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:39 GMT
	I0728 15:10:39.765239   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:40.260510   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:40.260533   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:40.260544   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:40.260554   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:40.263829   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:40.263839   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:40.263845   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:40.263850   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:40.263855   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:40.263859   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:40 GMT
	I0728 15:10:40.263864   20887 round_trippers.go:580]     Audit-Id: 31c1b84e-36ed-407f-b416-7fb5167a0fd6
	I0728 15:10:40.263869   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:40.264253   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:40.264552   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:40.264562   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:40.264568   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:40.264574   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:40.266510   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:40.266518   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:40.266524   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:40 GMT
	I0728 15:10:40.266528   20887 round_trippers.go:580]     Audit-Id: 20c31cb5-8d4d-4ec3-9f88-ce1e534b5287
	I0728 15:10:40.266533   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:40.266538   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:40.266545   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:40.266551   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:40.266716   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:40.762055   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:40.762077   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:40.762090   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:40.762101   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:40.765894   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:40.765909   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:40.765917   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:40.765923   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:40.765929   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:40 GMT
	I0728 15:10:40.765936   20887 round_trippers.go:580]     Audit-Id: 546dcaee-7603-4ad8-8dd2-a353ed34d5c4
	I0728 15:10:40.765943   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:40.765950   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:40.766037   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:40.766396   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:40.766402   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:40.766408   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:40.766413   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:40.768459   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:40.768468   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:40.768473   20887 round_trippers.go:580]     Audit-Id: 185e3fdd-6e08-4a9f-b490-2c2ae3a0a826
	I0728 15:10:40.768478   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:40.768483   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:40.768487   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:40.768492   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:40.768497   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:40 GMT
	I0728 15:10:40.768541   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:40.768723   20887 pod_ready.go:102] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:10:41.260288   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:41.260308   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:41.260319   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:41.260329   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:41.263397   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:41.263409   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:41.263414   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:41.263419   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:41.263423   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:41.263428   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:41.263432   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:41 GMT
	I0728 15:10:41.263437   20887 round_trippers.go:580]     Audit-Id: 7ef9496c-bde5-457f-aea4-a6fd3e54cb77
	I0728 15:10:41.263607   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:41.263895   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:41.263901   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:41.263907   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:41.263912   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:41.268271   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:41.268282   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:41.268288   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:41.268292   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:41.268296   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:41.268301   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:41 GMT
	I0728 15:10:41.268305   20887 round_trippers.go:580]     Audit-Id: 2bebbf52-31ad-4954-9def-e6dc05c818dc
	I0728 15:10:41.268309   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:41.268354   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:41.760293   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:41.760306   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:41.760313   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:41.760317   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:41.762310   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:41.762321   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:41.762328   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:41.762340   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:41.762346   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:41 GMT
	I0728 15:10:41.762352   20887 round_trippers.go:580]     Audit-Id: 0d3a470f-139f-4726-b071-cd11ad183752
	I0728 15:10:41.762360   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:41.762365   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:41.762426   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:41.762718   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:41.762725   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:41.762731   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:41.762737   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:41.764906   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:41.764917   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:41.764923   20887 round_trippers.go:580]     Audit-Id: 4b119874-3e41-4874-9906-09cd53ca8abc
	I0728 15:10:41.764928   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:41.764932   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:41.764937   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:41.764942   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:41.764947   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:41 GMT
	I0728 15:10:41.764999   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:42.260145   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:42.260161   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:42.260172   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:42.260178   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:42.262421   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:42.262431   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:42.262438   20887 round_trippers.go:580]     Audit-Id: 99cd325c-24b9-4cd5-8538-5b043d4f9519
	I0728 15:10:42.262442   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:42.262448   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:42.262452   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:42.262457   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:42.262463   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:42 GMT
	I0728 15:10:42.262522   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:42.262798   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:42.262805   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:42.262810   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:42.262815   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:42.264642   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:42.264651   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:42.264658   20887 round_trippers.go:580]     Audit-Id: fe1119d5-bf81-4f81-a3ae-809bc38953e8
	I0728 15:10:42.264663   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:42.264668   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:42.264673   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:42.264677   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:42.264682   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:42 GMT
	I0728 15:10:42.264729   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:42.760577   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:42.760599   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:42.760638   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:42.760647   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:42.764005   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:42.764018   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:42.764024   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:42.764029   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:42.764034   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:42.764038   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:42.764043   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:42 GMT
	I0728 15:10:42.764048   20887 round_trippers.go:580]     Audit-Id: 9522f679-a4e0-4ace-8e63-eb68b51e9cd4
	I0728 15:10:42.764111   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:42.764391   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:42.764397   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:42.764403   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:42.764407   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:42.766158   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:42.766167   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:42.766173   20887 round_trippers.go:580]     Audit-Id: 376884b1-4498-4a6e-a910-d0a1c9642c16
	I0728 15:10:42.766178   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:42.766183   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:42.766188   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:42.766194   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:42.766198   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:42 GMT
	I0728 15:10:42.766242   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:43.260319   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:43.260392   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:43.260403   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:43.260410   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:43.263077   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:43.263087   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:43.263093   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:43.263109   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:43.263117   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:43 GMT
	I0728 15:10:43.263122   20887 round_trippers.go:580]     Audit-Id: adc14e05-1ea8-48a5-a6be-32edc9b0d323
	I0728 15:10:43.263126   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:43.263133   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:43.263191   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:43.263466   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:43.263472   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:43.263478   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:43.263483   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:43.265360   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:43.265370   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:43.265376   20887 round_trippers.go:580]     Audit-Id: 3c70ca6f-608c-4122-b6ba-887a8f54397c
	I0728 15:10:43.265381   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:43.265386   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:43.265391   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:43.265395   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:43.265399   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:43 GMT
	I0728 15:10:43.265442   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:43.265622   20887 pod_ready.go:102] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:10:43.760526   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:43.760546   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:43.760559   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:43.760568   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:43.764934   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:43.764947   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:43.764952   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:43.764957   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:43.764961   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:43.764966   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:43 GMT
	I0728 15:10:43.764970   20887 round_trippers.go:580]     Audit-Id: 64e7eb15-d054-4789-9267-3913bc178aa3
	I0728 15:10:43.764975   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:43.765037   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:43.765315   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:43.765322   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:43.765327   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:43.765332   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:43.767218   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:43.767232   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:43.767243   20887 round_trippers.go:580]     Audit-Id: a2c14bbd-33d0-4c6a-988e-f06b17ebef0a
	I0728 15:10:43.767249   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:43.767254   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:43.767259   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:43.767264   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:43.767269   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:43 GMT
	I0728 15:10:43.767316   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:44.260257   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:44.260276   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:44.260285   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:44.260292   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:44.263075   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:44.263086   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:44.263091   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:44.263112   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:44.263120   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:44 GMT
	I0728 15:10:44.263125   20887 round_trippers.go:580]     Audit-Id: 350a3cfb-4492-408d-901e-ed1c90b65f45
	I0728 15:10:44.263129   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:44.263134   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:44.263190   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:44.263466   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:44.263472   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:44.263478   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:44.263483   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:44.265242   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:44.265255   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:44.265264   20887 round_trippers.go:580]     Audit-Id: 78393d54-4057-4ba2-bc12-1d9a27f67bc0
	I0728 15:10:44.265272   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:44.265279   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:44.265286   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:44.265290   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:44.265297   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:44 GMT
	I0728 15:10:44.265704   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:44.760292   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:44.760318   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:44.760330   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:44.760340   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:44.763840   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:44.763853   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:44.763859   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:44.763863   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:44 GMT
	I0728 15:10:44.763873   20887 round_trippers.go:580]     Audit-Id: 7411ab8f-6363-4449-b1fd-eff3befe1225
	I0728 15:10:44.763877   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:44.763882   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:44.763886   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:44.764178   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:44.764582   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:44.764591   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:44.764597   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:44.764602   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:44.766568   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:44.766581   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:44.766594   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:44.766599   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:44 GMT
	I0728 15:10:44.766604   20887 round_trippers.go:580]     Audit-Id: ff48eae8-bd21-441d-b53e-14eef268f914
	I0728 15:10:44.766609   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:44.766613   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:44.766618   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:44.766801   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:45.260244   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:45.260261   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.260270   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.260278   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.262774   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:45.262784   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.262790   20887 round_trippers.go:580]     Audit-Id: 2fd6f01e-c019-4aab-8309-950a00f73440
	I0728 15:10:45.262794   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.262798   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.262802   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.262807   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.262812   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.263014   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"778","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8090 chars]
	I0728 15:10:45.263294   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:45.263301   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.263306   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.263312   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.265004   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.265013   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.265018   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.265022   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.265027   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.265032   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.265037   20887 round_trippers.go:580]     Audit-Id: fa4eede5-189c-47ff-ba76-8f675fde9392
	I0728 15:10:45.265042   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.265394   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:45.265577   20887 pod_ready.go:92] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:45.265589   20887 pod_ready.go:81] duration metric: took 13.555908334s waiting for pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.265597   20887 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bxdk6" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.265626   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-bxdk6
	I0728 15:10:45.265632   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.265645   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.265651   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.267540   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.267549   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.267554   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.267560   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.267564   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.267569   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.267574   20887 round_trippers.go:580]     Audit-Id: 7b319a0e-7e40-4dc7-956c-15ab92fc7fa4
	I0728 15:10:45.267579   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.267617   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bxdk6","generateName":"kube-proxy-","namespace":"kube-system","uid":"befca8fa-aef6-415a-b033-8522067db320","resourceVersion":"474","creationTimestamp":"2022-07-28T22:07:45Z","labels":{"controller-revision-hash":"94985b49","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:07:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5548 chars]
	I0728 15:10:45.267844   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923-m02
	I0728 15:10:45.267851   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.267856   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.267861   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.269411   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.269420   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.269425   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.269430   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.269435   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.269439   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.269444   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.269448   20887 round_trippers.go:580]     Audit-Id: 500a00e3-3071-4921-93b0-dc39a3dd37a0
	I0728 15:10:45.269699   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923-m02","uid":"2d4cf78c-ed12-4ef8-9967-e85ae2ffd232","resourceVersion":"556","creationTimestamp":"2022-07-28T22:07:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:07:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:07:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 4523 chars]
	I0728 15:10:45.269858   20887 pod_ready.go:92] pod "kube-proxy-bxdk6" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:45.269864   20887 pod_ready.go:81] duration metric: took 4.261143ms waiting for pod "kube-proxy-bxdk6" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.269869   20887 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdz7z" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.269891   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-cdz7z
	I0728 15:10:45.269895   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.269901   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.269906   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.271646   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.271655   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.271660   20887 round_trippers.go:580]     Audit-Id: 7eae9957-76b9-4857-a730-2433bb623c68
	I0728 15:10:45.271664   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.271670   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.271677   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.271682   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.271687   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.271728   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cdz7z","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9727653-ed51-43f3-95ad-fd2f5fb0ac6e","resourceVersion":"704","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"controller-revision-hash":"94985b49","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5747 chars]
	I0728 15:10:45.271953   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:45.271960   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.271965   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.271971   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.273648   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.273658   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.273665   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.273671   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.273676   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.273680   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.273685   20887 round_trippers.go:580]     Audit-Id: d5b6380f-22e8-4b04-96a0-acf892f973e7
	I0728 15:10:45.273689   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.273940   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:45.274120   20887 pod_ready.go:92] pod "kube-proxy-cdz7z" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:45.274127   20887 pod_ready.go:81] duration metric: took 4.253147ms waiting for pod "kube-proxy-cdz7z" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.274132   20887 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cn9x2" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.274155   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-cn9x2
	I0728 15:10:45.274159   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.274165   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.274170   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.275827   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.275836   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.275841   20887 round_trippers.go:580]     Audit-Id: f2dd8787-5605-478d-a9e0-990d788947e4
	I0728 15:10:45.275845   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.275850   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.275854   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.275858   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.275863   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.276088   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cn9x2","generateName":"kube-proxy-","namespace":"kube-system","uid":"813dc8a0-2ea3-4ee9-83ce-fe09ccf38295","resourceVersion":"671","creationTimestamp":"2022-07-28T22:08:39Z","labels":{"controller-revision-hash":"94985b49","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:08:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5755 chars]
	I0728 15:10:45.276313   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923-m03
	I0728 15:10:45.276320   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.276326   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.276332   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.277836   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.277845   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.277850   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.277855   20887 round_trippers.go:580]     Audit-Id: 2ff6c7e6-3fd1-44f4-8676-5b4e545ffad4
	I0728 15:10:45.277860   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.277865   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.277870   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.277875   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.278084   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923-m03","uid":"705fe4c5-d194-48b6-83d4-926ad5fead86","resourceVersion":"686","creationTimestamp":"2022-07-28T22:09:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:09:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:09:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 4340 chars]
	I0728 15:10:45.278242   20887 pod_ready.go:92] pod "kube-proxy-cn9x2" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:45.278248   20887 pod_ready.go:81] duration metric: took 4.111555ms waiting for pod "kube-proxy-cn9x2" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.278253   20887 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.278278   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220728150610-12923
	I0728 15:10:45.278282   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.278289   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.278295   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.279986   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.279997   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.280005   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.280012   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.280019   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.280026   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.280032   20887 round_trippers.go:580]     Audit-Id: 925404fc-2cb7-49af-a90c-3a8502b855dd
	I0728 15:10:45.280039   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.280198   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220728150610-12923","namespace":"kube-system","uid":"ef5d84ce-4249-4af0-b1be-7a3d7f8c2205","resourceVersion":"742","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"164dd1e1cbdc7905e66f2af11f156d06","kubernetes.io/config.mirror":"164dd1e1cbdc7905e66f2af11f156d06","kubernetes.io/config.seen":"2022-07-28T22:06:37.255019449Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i [truncated 4974 chars]
	I0728 15:10:45.280403   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:45.280410   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.280417   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.280425   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.282198   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.282209   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.282216   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.282223   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.282230   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.282236   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.282245   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.282253   20887 round_trippers.go:580]     Audit-Id: e65c3f83-62ec-401f-bd44-5cd16b3f1076
	I0728 15:10:45.282292   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:45.282482   20887 pod_ready.go:92] pod "kube-scheduler-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:45.282490   20887 pod_ready.go:81] duration metric: took 4.231427ms waiting for pod "kube-scheduler-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.282495   20887 pod_ready.go:38] duration metric: took 13.6007007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:10:45.282506   20887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 15:10:45.289939   20887 command_runner.go:130] > -16
	I0728 15:10:45.289955   20887 ops.go:34] apiserver oom_adj: -16
	I0728 15:10:45.289959   20887 kubeadm.go:630] restartCluster took 24.369528973s
	I0728 15:10:45.289964   20887 kubeadm.go:397] StartCluster complete in 24.406112915s
	I0728 15:10:45.289975   20887 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:10:45.290052   20887 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:10:45.290409   20887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:10:45.290839   20887 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:10:45.291006   20887 kapi.go:59] client config for multinode-20220728150610-12923: &rest.Config{Host:"https://127.0.0.1:56607", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 15:10:45.291190   20887 round_trippers.go:463] GET https://127.0.0.1:56607/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0728 15:10:45.291197   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.291203   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.291209   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.293279   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:45.293289   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.293294   20887 round_trippers.go:580]     Audit-Id: 7bad219c-59cb-48c4-9e4b-0f36d8615f9a
	I0728 15:10:45.293299   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.293304   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.293309   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.293314   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.293318   20887 round_trippers.go:580]     Content-Length: 291
	I0728 15:10:45.293322   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.293380   20887 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6b59e24c-c365-439a-855f-a8318765ac15","resourceVersion":"762","creationTimestamp":"2022-07-28T22:06:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0728 15:10:45.293467   20887 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20220728150610-12923" rescaled to 1
	I0728 15:10:45.293494   20887 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 15:10:45.293511   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 15:10:45.315971   20887 out.go:177] * Verifying Kubernetes components...
	I0728 15:10:45.293525   20887 addons.go:412] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0728 15:10:45.293687   20887 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:10:45.358116   20887 addons.go:65] Setting storage-provisioner=true in profile "multinode-20220728150610-12923"
	I0728 15:10:45.358118   20887 addons.go:65] Setting default-storageclass=true in profile "multinode-20220728150610-12923"
	I0728 15:10:45.358135   20887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:10:45.358141   20887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20220728150610-12923"
	I0728 15:10:45.358145   20887 addons.go:153] Setting addon storage-provisioner=true in "multinode-20220728150610-12923"
	W0728 15:10:45.358187   20887 addons.go:162] addon storage-provisioner should already be in state true
	I0728 15:10:45.358246   20887 host.go:66] Checking if "multinode-20220728150610-12923" exists ...
	I0728 15:10:45.358462   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923 --format={{.State.Status}}
	I0728 15:10:45.358589   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923 --format={{.State.Status}}
	I0728 15:10:45.370224   20887 command_runner.go:130] > apiVersion: v1
	I0728 15:10:45.370249   20887 command_runner.go:130] > data:
	I0728 15:10:45.370255   20887 command_runner.go:130] >   Corefile: |
	I0728 15:10:45.370274   20887 command_runner.go:130] >     .:53 {
	I0728 15:10:45.370290   20887 command_runner.go:130] >         errors
	I0728 15:10:45.370300   20887 command_runner.go:130] >         health {
	I0728 15:10:45.370306   20887 command_runner.go:130] >            lameduck 5s
	I0728 15:10:45.370310   20887 command_runner.go:130] >         }
	I0728 15:10:45.370313   20887 command_runner.go:130] >         ready
	I0728 15:10:45.370320   20887 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0728 15:10:45.370325   20887 command_runner.go:130] >            pods insecure
	I0728 15:10:45.370332   20887 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0728 15:10:45.370339   20887 command_runner.go:130] >            ttl 30
	I0728 15:10:45.370343   20887 command_runner.go:130] >         }
	I0728 15:10:45.370346   20887 command_runner.go:130] >         prometheus :9153
	I0728 15:10:45.370349   20887 command_runner.go:130] >         hosts {
	I0728 15:10:45.370353   20887 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I0728 15:10:45.370359   20887 command_runner.go:130] >            fallthrough
	I0728 15:10:45.370362   20887 command_runner.go:130] >         }
	I0728 15:10:45.370367   20887 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0728 15:10:45.370377   20887 command_runner.go:130] >            max_concurrent 1000
	I0728 15:10:45.370381   20887 command_runner.go:130] >         }
	I0728 15:10:45.370384   20887 command_runner.go:130] >         cache 30
	I0728 15:10:45.370387   20887 command_runner.go:130] >         loop
	I0728 15:10:45.370391   20887 command_runner.go:130] >         reload
	I0728 15:10:45.370396   20887 command_runner.go:130] >         loadbalance
	I0728 15:10:45.370401   20887 command_runner.go:130] >     }
	I0728 15:10:45.370426   20887 command_runner.go:130] > kind: ConfigMap
	I0728 15:10:45.370460   20887 command_runner.go:130] > metadata:
	I0728 15:10:45.370473   20887 command_runner.go:130] >   creationTimestamp: "2022-07-28T22:06:37Z"
	I0728 15:10:45.370480   20887 command_runner.go:130] >   name: coredns
	I0728 15:10:45.370488   20887 command_runner.go:130] >   namespace: kube-system
	I0728 15:10:45.370497   20887 command_runner.go:130] >   resourceVersion: "364"
	I0728 15:10:45.370504   20887 command_runner.go:130] >   uid: d879b80e-f5be-4575-a685-39df1fda8448
	I0728 15:10:45.373627   20887 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
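The YAML dump above is the coredns ConfigMap as returned by the kubectl-over-SSH call, and start.go only injects the hosts block when host.minikube.internal is absent. A hedged sketch of the same presence check done via the API instead (hasMinikubeHostRecord is a hypothetical helper name, not minikube's function):

    import (
    	"context"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // hasMinikubeHostRecord reports whether the coredns Corefile already
    // carries the host.minikube.internal record, the condition behind the
    // "skipping..." line above.
    func hasMinikubeHostRecord(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
    }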
	I0728 15:10:45.373706   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:45.430697   20887 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:10:45.430901   20887 kapi.go:59] client config for multinode-20220728150610-12923: &rest.Config{Host:"https://127.0.0.1:56607", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 15:10:45.431168   20887 round_trippers.go:463] GET https://127.0.0.1:56607/apis/storage.k8s.io/v1/storageclasses
	I0728 15:10:45.431175   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.452268   20887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:10:45.452276   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.489179   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.489314   20887 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:10:45.489335   20887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 15:10:45.489464   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:45.493204   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:45.493229   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.493248   20887 round_trippers.go:580]     Audit-Id: 5def914f-74ad-47f7-9f51-e60a5e372ba3
	I0728 15:10:45.493255   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.493266   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.493284   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.493290   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.493295   20887 round_trippers.go:580]     Content-Length: 1273
	I0728 15:10:45.493300   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.493344   20887 request.go:1073] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"782"},"items":[{"metadata":{"name":"standard","uid":"e375d519-9f75-4e2d-8e80-c5ab845c65d4","resourceVersion":"376","creationTimestamp":"2022-07-28T22:06:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-07-28T22:06:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0728 15:10:45.493790   20887 request.go:1073] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"e375d519-9f75-4e2d-8e80-c5ab845c65d4","resourceVersion":"376","creationTimestamp":"2022-07-28T22:06:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-07-28T22:06:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0728 15:10:45.493829   20887 round_trippers.go:463] PUT https://127.0.0.1:56607/apis/storage.k8s.io/v1/storageclasses/standard
	I0728 15:10:45.493838   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.493846   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.493855   20887 round_trippers.go:473]     Content-Type: application/json
	I0728 15:10:45.493862   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.497502   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:45.497515   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.497521   20887 round_trippers.go:580]     Audit-Id: 2e1bb0fa-3dce-4d85-85f6-c9fe280d5ae6
	I0728 15:10:45.497525   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.497530   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.497535   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.497539   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.497544   20887 round_trippers.go:580]     Content-Length: 1220
	I0728 15:10:45.497548   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.497564   20887 request.go:1073] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"e375d519-9f75-4e2d-8e80-c5ab845c65d4","resourceVersion":"376","creationTimestamp":"2022-07-28T22:06:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-07-28T22:06:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0728 15:10:45.497633   20887 addons.go:153] Setting addon default-storageclass=true in "multinode-20220728150610-12923"
	W0728 15:10:45.497640   20887 addons.go:162] addon default-storageclass should already be in state true
	I0728 15:10:45.497658   20887 host.go:66] Checking if "multinode-20220728150610-12923" exists ...
	I0728 15:10:45.498003   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923 --format={{.State.Status}}
	I0728 15:10:45.498913   20887 node_ready.go:35] waiting up to 6m0s for node "multinode-20220728150610-12923" to be "Ready" ...
	I0728 15:10:45.499491   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:45.499497   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.499503   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.499509   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.502350   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:45.502365   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.502371   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.502382   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.502387   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.502392   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.502397   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.502401   20887 round_trippers.go:580]     Audit-Id: 4ea06635-25b5-497d-91a3-385d0663be03
	I0728 15:10:45.502519   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:45.502783   20887 node_ready.go:49] node "multinode-20220728150610-12923" has status "Ready":"True"
	I0728 15:10:45.502790   20887 node_ready.go:38] duration metric: took 3.845298ms waiting for node "multinode-20220728150610-12923" to be "Ready" ...
	I0728 15:10:45.502795   20887 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
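Everything from here down is a roughly 500ms polling loop: each iteration re-fetches the coredns pod and the node until the pod's Ready condition turns True or the 6m budget expires. A sketch of that loop with client-go's wait helper (waitPodReady is a hypothetical name, not minikube's function; the interval and timeout are taken from the cadence and budget visible in the log):

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod every 500ms until its PodReady condition is
    // True or the timeout elapses.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
    	return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == corev1.PodReady {
    				return cond.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }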
	I0728 15:10:45.559770   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:45.566123   20887 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 15:10:45.566138   20887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 15:10:45.566196   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:45.631837   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:45.652586   20887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:10:45.660380   20887 request.go:533] Waited for 157.530398ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:10:45.660410   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:10:45.660415   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.660422   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.660428   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.664791   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:45.664802   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.664807   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.664813   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.664818   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.664824   20887 round_trippers.go:580]     Audit-Id: c0b5ee39-3463-435c-9a96-277f24f38d5d
	I0728 15:10:45.664830   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.664840   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.666678   20887 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"783"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".": [truncated 84805 chars]
	I0728 15:10:45.668637   20887 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.725012   20887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 15:10:45.798274   20887 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0728 15:10:45.799901   20887 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0728 15:10:45.801689   20887 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0728 15:10:45.803566   20887 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0728 15:10:45.805189   20887 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0728 15:10:45.838090   20887 command_runner.go:130] > pod/storage-provisioner configured
	I0728 15:10:45.860572   20887 request.go:533] Waited for 191.888282ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:45.860622   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:45.860628   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.860636   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.860645   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.863610   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:45.863627   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.863639   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.863648   20887 round_trippers.go:580]     Audit-Id: bfffcf4e-af2e-420b-9dbf-97afa6d03e9c
	I0728 15:10:45.863654   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.863660   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.863665   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.863669   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.863822   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:45.870769   20887 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0728 15:10:45.898314   20887 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0728 15:10:45.957374   20887 addons.go:414] enableAddons completed in 663.85ms
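The recurring "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter; the rest.Config dumped earlier shows QPS:0, Burst:0, which fall back to the defaults of 5 and 10. A sketch of loosening those limits before the clientset is built (the 50/100 values are hypothetical, not what minikube uses):

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // withLooserThrottle (hypothetical) raises the client-side rate limits.
    // QPS/Burst of 0 mean client-go's defaults (5 and 10), which produce
    // the ~150-200ms waits logged here.
    func withLooserThrottle(config *rest.Config) (*kubernetes.Clientset, error) {
    	config.QPS = 50    // hypothetical value
    	config.Burst = 100 // hypothetical value
    	return kubernetes.NewForConfig(config)
    }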
	I0728 15:10:46.060324   20887 request.go:533] Waited for 196.181154ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:46.060378   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:46.060384   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:46.060392   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:46.060401   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:46.062836   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:46.062849   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:46.062857   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:46.062866   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:46.062874   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:46 GMT
	I0728 15:10:46.062879   20887 round_trippers.go:580]     Audit-Id: 3ec4147b-e6b1-4b59-aeef-9aa522b2872e
	I0728 15:10:46.062885   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:46.062892   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:46.063225   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:46.564349   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:46.564369   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:46.564380   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:46.564390   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:46.571082   20887 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0728 15:10:46.571094   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:46.571100   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:46.571105   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:46.571111   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:46.571125   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:46 GMT
	I0728 15:10:46.571135   20887 round_trippers.go:580]     Audit-Id: 7a2fad29-2f14-4cb1-99d9-dfef1a966a57
	I0728 15:10:46.571145   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:46.571207   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:46.571482   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:46.571488   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:46.571496   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:46.571502   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:46.573831   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:46.573846   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:46.573851   20887 round_trippers.go:580]     Audit-Id: 5445bcb3-91f8-4ab6-a40c-94f0d39e9a33
	I0728 15:10:46.573856   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:46.573861   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:46.573866   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:46.573871   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:46.573877   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:46 GMT
	I0728 15:10:46.573931   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:47.065346   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:47.065367   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:47.065383   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:47.065393   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:47.069343   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:47.069357   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:47.069366   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:47 GMT
	I0728 15:10:47.069373   20887 round_trippers.go:580]     Audit-Id: a7e5dba7-7f15-42e0-af1b-9e55264ca985
	I0728 15:10:47.069379   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:47.069386   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:47.069394   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:47.069400   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:47.069484   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:47.069767   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:47.069774   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:47.069780   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:47.069785   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:47.071647   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:47.071657   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:47.071663   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:47.071669   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:47 GMT
	I0728 15:10:47.071673   20887 round_trippers.go:580]     Audit-Id: 31078036-856a-4028-8273-61b2ffed7e95
	I0728 15:10:47.071678   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:47.071683   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:47.071687   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:47.071852   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:47.564076   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:47.564091   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:47.564097   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:47.564103   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:47.566584   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:47.566595   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:47.566601   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:47 GMT
	I0728 15:10:47.566605   20887 round_trippers.go:580]     Audit-Id: 1fd4aacd-16e9-46fe-806a-ab3f0c873488
	I0728 15:10:47.566611   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:47.566631   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:47.566640   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:47.566645   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:47.566906   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:47.567193   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:47.567202   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:47.567208   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:47.567214   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:47.569143   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:47.569155   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:47.569163   20887 round_trippers.go:580]     Audit-Id: 6512e4f8-4ea8-4051-883a-2873047fdc10
	I0728 15:10:47.569169   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:47.569184   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:47.569192   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:47.569199   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:47.569206   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:47 GMT
	I0728 15:10:47.569249   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:48.063713   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:48.063731   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:48.063743   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:48.063761   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:48.067964   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:48.067986   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:48.068004   20887 round_trippers.go:580]     Audit-Id: 3455f0d3-08c2-400b-9fd5-d9660430f312
	I0728 15:10:48.068012   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:48.068017   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:48.068022   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:48.068027   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:48.068032   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:48 GMT
	I0728 15:10:48.068110   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:48.068382   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:48.068388   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:48.068394   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:48.068403   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:48.070364   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:48.070373   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:48.070378   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:48 GMT
	I0728 15:10:48.070383   20887 round_trippers.go:580]     Audit-Id: 2f4d5e2e-f2d4-49d9-9bcd-037ad103dea9
	I0728 15:10:48.070388   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:48.070393   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:48.070398   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:48.070403   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:48.070473   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:48.070670   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
	I0728 15:10:48.563880   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:48.563901   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:48.563914   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:48.563925   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:48.567874   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:48.567890   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:48.567899   20887 round_trippers.go:580]     Audit-Id: b57b7b52-a223-4496-8e53-ee0e35407746
	I0728 15:10:48.567909   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:48.567917   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:48.567926   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:48.567935   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:48.567943   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:48 GMT
	I0728 15:10:48.568073   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:48.568352   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:48.568358   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:48.568363   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:48.568369   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:48.570207   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:48.570216   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:48.570222   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:48.570227   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:48 GMT
	I0728 15:10:48.570234   20887 round_trippers.go:580]     Audit-Id: 2dc9eb29-d22e-4042-b881-dee32e868611
	I0728 15:10:48.570241   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:48.570246   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:48.570250   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:48.570296   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:49.064380   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:49.064406   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:49.064418   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:49.064428   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:49.068365   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:49.068376   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:49.068381   20887 round_trippers.go:580]     Audit-Id: a7596f52-765d-4659-97c6-fa57e3e89f9d
	I0728 15:10:49.068386   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:49.068391   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:49.068395   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:49.068400   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:49.068405   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:49 GMT
	I0728 15:10:49.068465   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:49.068739   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:49.068745   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:49.068751   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:49.068756   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:49.070847   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:49.070857   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:49.070864   20887 round_trippers.go:580]     Audit-Id: 4e38523a-843d-468d-adac-cef9b55c0d1d
	I0728 15:10:49.070870   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:49.070877   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:49.070882   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:49.070887   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:49.070891   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:49 GMT
	I0728 15:10:49.070936   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:49.563984   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:49.563996   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:49.564003   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:49.564008   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:49.566580   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:49.566594   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:49.566600   20887 round_trippers.go:580]     Audit-Id: 3e599aac-819c-4d73-9062-855be48e69d0
	I0728 15:10:49.566605   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:49.566612   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:49.566618   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:49.566623   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:49.566632   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:49 GMT
	I0728 15:10:49.566794   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:49.567073   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:49.567079   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:49.567085   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:49.567091   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:49.569224   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:49.569236   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:49.569248   20887 round_trippers.go:580]     Audit-Id: 45c3ae35-65b3-476e-baa2-6671328b32dc
	I0728 15:10:49.569257   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:49.569266   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:49.569274   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:49.569280   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:49.569287   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:49 GMT
	I0728 15:10:49.569348   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	[... 2 near-identical polling rounds (GET pod coredns-6d4b75cb6d-dfxk7, GET node multinode-20220728150610-12923) at 15:10:50 elided ...]
	I0728 15:10:50.571562   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
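	(Editor's note: the flood of round_trippers lines above is minikube's pod readiness wait loop. Roughly every 500 ms it GETs the CoreDNS pod, checks its Ready condition, and re-fetches the node the pod is scheduled on. Below is a minimal client-go sketch of that pattern; it is illustrative only, not minikube's actual pod_ready.go. The pod name, namespace, and kubeconfig path are assumptions taken from the log.)

	// Sketch of the readiness polling loop visible in the log above.
	// Assumes a reachable cluster via the default kubeconfig; all names
	// are taken from the log for illustration.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx := context.Background()
		for {
			// GET /api/v1/namespaces/kube-system/pods/<name>
			pod, err := client.CoreV1().Pods("kube-system").
				Get(ctx, "coredns-6d4b75cb6d-dfxk7", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			// GET /api/v1/nodes/<node> - the log re-checks the node each round.
			if _, err := client.CoreV1().Nodes().
				Get(ctx, pod.Spec.NodeName, metav1.GetOptions{}); err != nil {
				panic(err)
			}
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n",
				pod.Name, pod.Namespace)
			time.Sleep(500 * time.Millisecond)
		}
	}

	(Note that in the log the GET rounds arrive every ~500 ms while the pod_ready.go:102 status line appears only every ~2 s, so the status logging is evidently rate-limited relative to the polling interval.)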
	[... 5 near-identical polling rounds, 15:10:51 through 15:10:53, elided ...]
	I0728 15:10:53.069700   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
	[... 4 near-identical polling rounds, 15:10:53 through 15:10:55, elided ...]
	I0728 15:10:55.072398   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
	I0728 15:10:55.563601   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:55.563613   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:55.563620   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:55.563625   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:55.566313   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:55.566324   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:55.566329   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:55.566334   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:55.566338   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:55.566347   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:55.566352   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:55 GMT
	I0728 15:10:55.566356   20887 round_trippers.go:580]     Audit-Id: f00291f9-0e26-49bf-a235-bb706ec59210
	I0728 15:10:55.566469   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:55.566760   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:55.566766   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:55.566773   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:55.566778   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:55.568951   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:55.568962   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:55.568973   20887 round_trippers.go:580]     Audit-Id: e4706492-54b0-429b-8aeb-c3ca5504a6f8
	I0728 15:10:55.568984   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:55.568991   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:55.568997   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:55.569005   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:55.569011   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:55 GMT
	I0728 15:10:55.569063   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:56.063706   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:56.063727   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:56.063738   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:56.063749   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:56.067741   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:56.067763   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:56.067772   20887 round_trippers.go:580]     Audit-Id: 9cd602cb-19ec-420e-946c-b3ab0ecd4546
	I0728 15:10:56.067780   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:56.067789   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:56.067797   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:56.067803   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:56.067812   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:56 GMT
	I0728 15:10:56.068084   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:56.068365   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:56.068371   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:56.068379   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:56.068384   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:56.070398   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:56.070407   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:56.070412   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:56.070417   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:56.070422   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:56 GMT
	I0728 15:10:56.070426   20887 round_trippers.go:580]     Audit-Id: 59bc3c86-d0f7-4f43-ab1f-8be270319127
	I0728 15:10:56.070431   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:56.070436   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:56.070478   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:56.564318   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:56.564333   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:56.564342   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:56.564349   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:56.567635   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:56.567645   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:56.567650   20887 round_trippers.go:580]     Audit-Id: 1e6cc6ba-0a60-4ebc-845e-c11539317e3b
	I0728 15:10:56.567656   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:56.567661   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:56.567665   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:56.567670   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:56.567674   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:56 GMT
	I0728 15:10:56.567799   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:56.568065   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:56.568072   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:56.568078   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:56.568083   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:56.570006   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:56.570015   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:56.570021   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:56.570026   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:56.570033   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:56.570039   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:56 GMT
	I0728 15:10:56.570044   20887 round_trippers.go:580]     Audit-Id: 63e2158d-0c0c-4a2f-9059-aeb4a66c78dd
	I0728 15:10:56.570075   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:56.570331   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:57.063574   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:57.063590   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:57.063599   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:57.063606   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:57.067040   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:57.067056   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:57.067065   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:57.067070   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:57.067075   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:57 GMT
	I0728 15:10:57.067079   20887 round_trippers.go:580]     Audit-Id: 5e15ec86-3a98-4ac6-8fe6-c1ce9daa3078
	I0728 15:10:57.067086   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:57.067091   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:57.067147   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:57.067426   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:57.067432   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:57.067438   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:57.067443   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:57.069502   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:57.069515   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:57.069524   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:57.069535   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:57.069544   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:57.069549   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:57.069559   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:57 GMT
	I0728 15:10:57.069566   20887 round_trippers.go:580]     Audit-Id: f6a7388d-204b-463c-adfe-5bc71b031143
	I0728 15:10:57.069826   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:57.563802   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:57.563817   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:57.563826   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:57.563833   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:57.566788   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:57.566799   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:57.566805   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:57.566810   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:57 GMT
	I0728 15:10:57.566815   20887 round_trippers.go:580]     Audit-Id: 58a2162e-4e9e-4d0b-92ff-1c0efe90a984
	I0728 15:10:57.566819   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:57.566825   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:57.566830   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:57.566886   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:57.567161   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:57.567167   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:57.567173   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:57.567178   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:57.569129   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:57.569138   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:57.569144   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:57.569151   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:57.569161   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:57.569172   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:57 GMT
	I0728 15:10:57.569181   20887 round_trippers.go:580]     Audit-Id: 2556eec1-d60a-4386-a02b-a2d1c7d72944
	I0728 15:10:57.569198   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:57.569390   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:57.569588   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
	I0728 15:10:58.063656   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:58.063677   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:58.063692   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:58.063702   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:58.067007   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:58.067018   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:58.067026   20887 round_trippers.go:580]     Audit-Id: e90d48dc-b18b-42c3-bf96-c942f39fb014
	I0728 15:10:58.067032   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:58.067037   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:58.067041   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:58.067046   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:58.067051   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:58 GMT
	I0728 15:10:58.067324   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:58.067606   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:58.067612   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:58.067618   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:58.067623   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:58.069541   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:58.069549   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:58.069555   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:58 GMT
	I0728 15:10:58.069561   20887 round_trippers.go:580]     Audit-Id: 69b991a0-0e32-4e72-b32a-b4dc8808896d
	I0728 15:10:58.069567   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:58.069571   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:58.069576   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:58.069580   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:58.069621   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:58.564048   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:58.564066   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:58.564078   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:58.564088   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:58.568006   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:58.568019   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:58.568030   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:58.568038   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:58 GMT
	I0728 15:10:58.568044   20887 round_trippers.go:580]     Audit-Id: c726d51a-6c18-4ffb-846d-3189bb43ff03
	I0728 15:10:58.568052   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:58.568058   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:58.568064   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:58.568131   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:58.568502   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:58.568511   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:58.568519   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:58.568526   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:58.570526   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:58.570535   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:58.570541   20887 round_trippers.go:580]     Audit-Id: 8f2fd8b4-67fd-43e2-98ac-0bd66c8c473d
	I0728 15:10:58.570546   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:58.570551   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:58.570555   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:58.570563   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:58.570568   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:58 GMT
	I0728 15:10:58.570608   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:59.064586   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:59.064608   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:59.064620   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:59.064630   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:59.068812   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:59.068824   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:59.068851   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:59 GMT
	I0728 15:10:59.068856   20887 round_trippers.go:580]     Audit-Id: 54a273ad-00fa-43bd-b652-d36c772033a6
	I0728 15:10:59.068874   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:59.068882   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:59.068887   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:59.068893   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:59.068992   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:59.069308   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:59.069316   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:59.069322   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:59.069327   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:59.071878   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:59.071887   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:59.071892   20887 round_trippers.go:580]     Audit-Id: 48457703-3a07-4508-bac0-6f42f81cf1c9
	I0728 15:10:59.071899   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:59.071904   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:59.071908   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:59.071913   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:59.071918   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:59 GMT
	I0728 15:10:59.071956   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:59.563500   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:59.563525   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:59.563538   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:59.563548   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:59.567358   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:59.567371   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:59.567377   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:59.567382   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:59.567391   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:59.567396   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:59.567401   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:59 GMT
	I0728 15:10:59.567405   20887 round_trippers.go:580]     Audit-Id: d671048b-df86-467d-a9f5-ab4b2b366f5b
	I0728 15:10:59.567460   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:59.567774   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:59.567781   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:59.567787   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:59.567791   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:59.571832   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:59.571843   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:59.571855   20887 round_trippers.go:580]     Audit-Id: be0ca3d0-a09c-4de2-adda-63994a4845fe
	I0728 15:10:59.571864   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:59.571871   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:59.571879   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:59.571886   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:59.571893   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:59 GMT
	I0728 15:10:59.572213   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:59.572398   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
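Each repeated pod_ready.go:102 entry above marks one iteration of a roughly 500 ms polling loop: fetch the coredns Pod, check its Ready condition, and retry while it is False. A minimal sketch of that pattern, assuming client-go; podReady and waitPodReady are hypothetical names for illustration, not minikube's actual pod_ready.go code:

    package podwait

    import (
    	"context"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the Pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // waitPodReady polls the named pod every 500ms until it is Ready, the
    // context is cancelled, or a GET fails, logging each not-ready iteration
    // much like the pod_ready.go:102 lines in this log.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if podReady(pod) {
    			return nil
    		}
    		log.Printf("pod %q in %q namespace has status \"Ready\":\"False\"", name, ns)
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C:
    		}
    	}
    }

Throughout this excerpt the loop never observes Ready=True for coredns-6d4b75cb6d-dfxk7, so the iterations keep repeating.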
	I0728 15:11:00.063722   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:00.063742   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:00.063755   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:00.063764   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:00.068021   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:00.068033   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:00.068044   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:00.068050   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:00.068056   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:00 GMT
	I0728 15:11:00.068063   20887 round_trippers.go:580]     Audit-Id: 61cc8825-38ee-4eea-8d17-a091365286c2
	I0728 15:11:00.068070   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:00.068075   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:00.068143   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:00.068411   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:00.068417   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:00.068423   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:00.068428   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:00.070157   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:00.070168   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:00.070174   20887 round_trippers.go:580]     Audit-Id: 3c64c012-16ac-4cb0-80bd-eda6ee2a0c91
	I0728 15:11:00.070180   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:00.070187   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:00.070194   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:00.070199   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:00.070203   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:00 GMT
	I0728 15:11:00.070261   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:00.564005   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:00.564019   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:00.564029   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:00.564036   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:00.567328   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:00.567339   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:00.567344   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:00 GMT
	I0728 15:11:00.567349   20887 round_trippers.go:580]     Audit-Id: bef46ae1-ebb1-48e2-80df-87832a8dd3df
	I0728 15:11:00.567353   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:00.567361   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:00.567366   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:00.567370   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:00.567481   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:00.567747   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:00.567753   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:00.567759   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:00.567764   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:00.569567   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:00.569581   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:00.569588   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:00.569595   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:00.569601   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:00 GMT
	I0728 15:11:00.569605   20887 round_trippers.go:580]     Audit-Id: 462e3cc5-9e8c-4d9a-891a-a5af21ed80c5
	I0728 15:11:00.569609   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:00.569619   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:00.569668   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:01.063829   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:01.063856   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:01.063869   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:01.063880   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:01.067950   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:01.067966   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:01.067973   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:01.067980   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:01.067987   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:01.067993   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:01.068004   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:01 GMT
	I0728 15:11:01.068011   20887 round_trippers.go:580]     Audit-Id: 3c30da17-24e9-44f8-9e77-98f6a9bc17ab
	I0728 15:11:01.068096   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:01.068372   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:01.068380   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:01.068386   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:01.068393   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:01.070302   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:01.070311   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:01.070316   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:01.070321   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:01.070326   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:01.070330   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:01.070335   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:01 GMT
	I0728 15:11:01.070340   20887 round_trippers.go:580]     Audit-Id: b998cadf-5c8f-4db9-bfb2-70f39938ccdc
	I0728 15:11:01.070388   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:01.563562   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:01.563577   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:01.563586   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:01.563593   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:01.566687   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:01.566700   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:01.566706   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:01.566711   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:01.566716   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:01.566722   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:01 GMT
	I0728 15:11:01.566726   20887 round_trippers.go:580]     Audit-Id: 7d932a15-7127-41a7-b3fc-9f93359013eb
	I0728 15:11:01.566731   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:01.566899   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:01.567191   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:01.567197   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:01.567203   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:01.567208   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:01.569126   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:01.569136   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:01.569142   20887 round_trippers.go:580]     Audit-Id: 5ccc4432-5074-4a6b-b64f-62fc964c2a72
	I0728 15:11:01.569151   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:01.569157   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:01.569163   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:01.569168   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:01.569173   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:01 GMT
	I0728 15:11:01.569224   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:02.065549   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:02.065571   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:02.065583   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:02.065593   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:02.069164   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:02.069174   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:02.069180   20887 round_trippers.go:580]     Audit-Id: 0a2dcc2c-53c6-47f6-9847-31d585b9b6b8
	I0728 15:11:02.069184   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:02.069189   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:02.069194   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:02.069199   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:02.069203   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:02 GMT
	I0728 15:11:02.069253   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:02.069542   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:02.069548   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:02.069554   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:02.069559   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:02.071618   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:02.071627   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:02.071633   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:02.071637   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:02 GMT
	I0728 15:11:02.071642   20887 round_trippers.go:580]     Audit-Id: 26ad4969-b63d-4db6-84d0-7646fb99ee51
	I0728 15:11:02.071647   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:02.071652   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:02.071656   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:02.071696   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:02.071897   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
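The pod_ready.go:102 line above closes one iteration of the readiness poll that dominates this log: roughly every 500ms the test GETs the coredns pod, inspects its Ready condition, logs the result, and retries until the condition flips or the 6m0s budget runs out. A minimal client-go sketch of that kind of loop (an assumed shape for illustration, not minikube's actual pod_ready.go; clientset construction is elided):

package podready

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the API server every interval (~500ms in the log above)
// until the pod's Ready condition is True or the timeout (6m0s here) elapses.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, interval, timeout time.Duration) error {
	return wait.PollImmediate(interval, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// Treat a failed GET as "not ready yet" and keep polling.
			return false, nil
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}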
	I0728 15:11:02.564759   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:02.564775   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:02.564783   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:02.564794   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:02.567870   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:02.567883   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:02.567891   20887 round_trippers.go:580]     Audit-Id: ba5c0f67-b6b7-44f2-abd6-5d2f9d32543c
	I0728 15:11:02.567897   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:02.567904   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:02.567908   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:02.567913   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:02.567918   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:02 GMT
	I0728 15:11:02.568114   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:02.568399   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:02.568406   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:02.568412   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:02.568418   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:02.570398   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:02.570407   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:02.570412   20887 round_trippers.go:580]     Audit-Id: 040c8970-aea2-42e1-a245-ed3bd8addaca
	I0728 15:11:02.570419   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:02.570428   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:02.570443   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:02.570452   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:02.570457   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:02 GMT
	I0728 15:11:02.570658   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:03.064192   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:03.064213   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:03.064225   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:03.064234   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:03.068874   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:03.068888   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:03.068894   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:03.068900   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:03 GMT
	I0728 15:11:03.068904   20887 round_trippers.go:580]     Audit-Id: f9ea7d9d-892a-438f-87a8-28c0f2936263
	I0728 15:11:03.068909   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:03.068914   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:03.068919   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:03.068972   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:03.069259   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:03.069266   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:03.069272   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:03.069277   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:03.071236   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:03.071245   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:03.071253   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:03.071259   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:03.071271   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:03.071276   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:03 GMT
	I0728 15:11:03.071281   20887 round_trippers.go:580]     Audit-Id: 9fa802be-2768-4cac-a516-c7ef1678838c
	I0728 15:11:03.071285   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:03.071326   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:03.563632   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:03.563653   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:03.563665   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:03.563675   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:03.567428   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:03.567440   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:03.567445   20887 round_trippers.go:580]     Audit-Id: e09ce4d5-51dd-4288-9ee0-745a371b807e
	I0728 15:11:03.567450   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:03.567455   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:03.567459   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:03.567464   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:03.567468   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:03 GMT
	I0728 15:11:03.567525   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:03.567796   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:03.567803   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:03.567808   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:03.567813   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:03.569689   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:03.569699   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:03.569705   20887 round_trippers.go:580]     Audit-Id: 67bb063a-3b65-4f8f-b88f-f32ce3ef38fe
	I0728 15:11:03.569710   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:03.569715   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:03.569719   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:03.569746   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:03.569767   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:03 GMT
	I0728 15:11:03.570056   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:04.065237   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:04.065262   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:04.065274   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:04.065329   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:04.069390   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:04.069405   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:04.069413   20887 round_trippers.go:580]     Audit-Id: a52bf9d7-c41b-4053-acfe-fc95b81c040f
	I0728 15:11:04.069420   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:04.069427   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:04.069433   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:04.069440   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:04.069446   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:04 GMT
	I0728 15:11:04.069531   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:04.069867   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:04.069873   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:04.069879   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:04.069884   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:04.072055   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:04.072064   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:04.072070   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:04.072085   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:04.072093   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:04.072098   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:04 GMT
	I0728 15:11:04.072103   20887 round_trippers.go:580]     Audit-Id: ae8e301e-7248-44d0-a7d3-800f1001d802
	I0728 15:11:04.072109   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:04.072159   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:04.072348   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
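Note that every iteration pairs the pod GET with a GET of the node object (the /api/v1/nodes/multinode-20220728150610-12923 requests). A plausible reason, inferred from the log rather than confirmed from source, is to let the wait abort early if the node itself stops reporting Ready instead of burning the full timeout. A sketch of that node-side check, using the same client-go types as the previous snippet:

package podready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the node's Ready condition is True; a poll loop
// can consult it after each pod GET and bail out if the node goes down.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}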
	I0728 15:11:04.563509   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:04.563536   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:04.563587   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:04.563599   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:04.567314   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:04.567326   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:04.567332   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:04.567336   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:04 GMT
	I0728 15:11:04.567340   20887 round_trippers.go:580]     Audit-Id: 3647f750-0959-4be7-8fae-e57918b64a2a
	I0728 15:11:04.567345   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:04.567350   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:04.567354   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:04.567407   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:04.567687   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:04.567693   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:04.567699   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:04.567704   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:04.569743   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:04.569753   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:04.569758   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:04 GMT
	I0728 15:11:04.569763   20887 round_trippers.go:580]     Audit-Id: 995f55b3-2054-4486-8c9d-aa2e935ef09a
	I0728 15:11:04.569767   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:04.569772   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:04.569777   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:04.569781   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:04.569898   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:05.065003   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:05.065028   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:05.065039   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:05.065049   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:05.069735   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:05.069748   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:05.069755   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:05.069759   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:05.069764   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:05 GMT
	I0728 15:11:05.069769   20887 round_trippers.go:580]     Audit-Id: 7a444741-4d37-4095-bc47-b861813b8cd1
	I0728 15:11:05.069773   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:05.069778   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:05.069834   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:05.070116   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:05.070124   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:05.070130   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:05.070135   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:05.072231   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:05.072250   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:05.072276   20887 round_trippers.go:580]     Audit-Id: 15e863d8-854e-4c8b-969b-2a1156135b26
	I0728 15:11:05.072284   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:05.072289   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:05.072299   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:05.072305   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:05.072309   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:05 GMT
	I0728 15:11:05.072440   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:05.563448   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:05.563461   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:05.563468   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:05.563473   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:05.566374   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:05.566386   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:05.566392   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:05 GMT
	I0728 15:11:05.566397   20887 round_trippers.go:580]     Audit-Id: 202148f5-099b-4e75-be1d-c99656e5be90
	I0728 15:11:05.566402   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:05.566406   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:05.566412   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:05.566417   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:05.566489   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:05.566768   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:05.566774   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:05.566780   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:05.566785   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:05.568956   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:05.568967   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:05.568974   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:05.568981   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:05.568992   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:05 GMT
	I0728 15:11:05.569007   20887 round_trippers.go:580]     Audit-Id: 79e8b229-9b93-4109-9c97-ea2827cb22e8
	I0728 15:11:05.569024   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:05.569030   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:05.569079   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.063520   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:06.063542   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.063555   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.063566   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.067292   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:06.067303   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.067311   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.067318   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.067325   20887 round_trippers.go:580]     Audit-Id: 3c84ee11-5706-4724-8d6b-603095cb35d2
	I0728 15:11:06.067331   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.067352   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.067360   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.067590   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:06.067870   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.067877   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.067883   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.067888   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.069924   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:06.069933   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.069938   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.069943   20887 round_trippers.go:580]     Audit-Id: 224153b4-5234-475d-98d6-dd85b9183abb
	I0728 15:11:06.069950   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.069954   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.069959   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.069963   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.070016   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.563645   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:06.563666   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.563678   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.563689   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.567856   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:06.567872   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.567880   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.567886   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.567895   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.567904   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.567910   20887 round_trippers.go:580]     Audit-Id: 946626db-6b5a-4298-aff0-1a358e9eefd6
	I0728 15:11:06.567917   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.567993   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"797","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6189 chars]
	I0728 15:11:06.568310   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.568316   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.568321   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.568327   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.570213   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.570222   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.570228   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.570235   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.570246   20887 round_trippers.go:580]     Audit-Id: 2ff213e9-ff02-4c3b-90ba-15b56d031bcf
	I0728 15:11:06.570258   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.570271   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.570280   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.570510   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.570691   20887 pod_ready.go:92] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:06.570701   20887 pod_ready.go:81] duration metric: took 20.902231055s waiting for pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace to be "Ready" ...
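The coredns pod finally reports Ready after ~20.9s of polling. The round_trippers.go entries that make up the bulk of this wait come from client-go's verbose HTTP request tracing, surfaced here via --alsologtostderr. A hypothetical stripped-down equivalent of such a tracing wrapper, not the real k8s.io/client-go/transport implementation:

package debugrt

import (
	"log"
	"net/http"
	"time"
)

// loggingRoundTripper logs the method/URL, request headers, response status
// with latency, and response headers, producing entries shaped like the
// round_trippers.go lines above.
type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, vals := range req.Header {
		for _, v := range vals {
			log.Printf("    %s: %s", k, v)
		}
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	log.Printf("Response Headers:")
	for k, vals := range resp.Header {
		for _, v := range vals {
			log.Printf("    %s: %s", k, v)
		}
	}
	return resp, nil
}

Such a wrapper would be installed on the client as, e.g., &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}} (again an illustrative sketch; client-go wires its own debug transport internally).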
	I0728 15:11:06.570707   20887 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.570733   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/etcd-multinode-20220728150610-12923
	I0728 15:11:06.570737   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.570743   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.570748   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.572702   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.572710   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.572715   20887 round_trippers.go:580]     Audit-Id: 5837a8d2-e495-4e85-9654-85a42ae50c5d
	I0728 15:11:06.572724   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.572731   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.572737   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.572742   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.572746   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.573005   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220728150610-12923","namespace":"kube-system","uid":"2d4683e7-2e93-41c5-af51-5181a7c29edd","resourceVersion":"730","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"86483540902094820b1a0cc29f31f22f","kubernetes.io/config.mirror":"86483540902094820b1a0cc29f31f22f","kubernetes.io/config.seen":"2022-07-28T22:06:37.255020292Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fi
eldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io [truncated 6050 chars]
	I0728 15:11:06.573258   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.573265   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.573273   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.573280   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.575044   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.575052   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.575058   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.575063   20887 round_trippers.go:580]     Audit-Id: 42358bb8-3946-447e-a113-b72b4d2be218
	I0728 15:11:06.575067   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.575072   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.575076   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.575081   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.575127   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.575336   20887 pod_ready.go:92] pod "etcd-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:06.575343   20887 pod_ready.go:81] duration metric: took 4.630782ms waiting for pod "etcd-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.575356   20887 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.575384   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220728150610-12923
	I0728 15:11:06.575388   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.575394   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.575399   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.577088   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.577096   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.577102   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.577106   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.577111   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.577116   20887 round_trippers.go:580]     Audit-Id: b3ba3005-c591-4241-bf4b-47ec3a215d2d
	I0728 15:11:06.577121   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.577125   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.577190   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220728150610-12923","namespace":"kube-system","uid":"34425f5f-5cbc-4e7c-89b3-e4758c44f162","resourceVersion":"727","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"f17cd4a02884221436b424aa5c9008ee","kubernetes.io/config.mirror":"f17cd4a02884221436b424aa5c9008ee","kubernetes.io/config.seen":"2022-07-28T22:06:37.255021189Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z",
"fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".": [truncated 8517 chars]
	I0728 15:11:06.577440   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.577446   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.577452   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.577457   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.578986   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.578997   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.579003   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.579009   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.579013   20887 round_trippers.go:580]     Audit-Id: 3307727b-5d95-4468-b7d8-8329583149bb
	I0728 15:11:06.579018   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.579022   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.579049   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.579271   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.579439   20887 pod_ready.go:92] pod "kube-apiserver-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:06.579444   20887 pod_ready.go:81] duration metric: took 4.082874ms waiting for pod "kube-apiserver-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.579450   20887 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.579472   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:11:06.579476   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.579481   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.579486   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.581318   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.581327   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.581332   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.581343   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.581349   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.581354   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.581359   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.581364   20887 round_trippers.go:580]     Audit-Id: 0ca7ffc2-ef2c-4cc7-b3ce-2f71c934b530
	I0728 15:11:06.581672   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"778","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8090 chars]
	I0728 15:11:06.581935   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.581942   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.581948   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.581953   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.583745   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.583754   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.583759   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.583764   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.583769   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.583774   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.583779   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.583790   20887 round_trippers.go:580]     Audit-Id: 09adf87b-2d67-4e01-a725-d038f8d9ee1d
	I0728 15:11:06.583836   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.584018   20887 pod_ready.go:92] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:06.584024   20887 pod_ready.go:81] duration metric: took 4.570056ms waiting for pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.584029   20887 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bxdk6" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.584049   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-bxdk6
	I0728 15:11:06.584053   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.584059   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.584064   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.585706   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.585714   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.585719   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.585724   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.585729   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.585733   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.585737   20887 round_trippers.go:580]     Audit-Id: 31cfff38-39b4-41b8-8384-f4329b95e87f
	I0728 15:11:06.585742   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.585786   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bxdk6","generateName":"kube-proxy-","namespace":"kube-system","uid":"befca8fa-aef6-415a-b033-8522067db320","resourceVersion":"474","creationTimestamp":"2022-07-28T22:07:45Z","labels":{"controller-revision-hash":"94985b49","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:07:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5548 chars]
	I0728 15:11:06.586022   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923-m02
	I0728 15:11:06.586028   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.586034   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.586039   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.587472   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.587481   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.587486   20887 round_trippers.go:580]     Audit-Id: 19724d84-538d-4c14-aeb8-b2098d890ee9
	I0728 15:11:06.587490   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.587494   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.587499   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.587503   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.587508   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.587661   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923-m02","uid":"2d4cf78c-ed12-4ef8-9967-e85ae2ffd232","resourceVersion":"556","creationTimestamp":"2022-07-28T22:07:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:07:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:07:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 4523 chars]
	I0728 15:11:06.587815   20887 pod_ready.go:92] pod "kube-proxy-bxdk6" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:06.587821   20887 pod_ready.go:81] duration metric: took 3.78832ms waiting for pod "kube-proxy-bxdk6" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.587826   20887 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cdz7z" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.765733   20887 request.go:533] Waited for 177.863354ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-cdz7z
	I0728 15:11:06.765828   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-cdz7z
	I0728 15:11:06.765835   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.765935   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.765947   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.770071   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:06.770089   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.770100   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.770112   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.770125   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.770142   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.770153   20887 round_trippers.go:580]     Audit-Id: fc0edabf-73f5-416f-9d1e-c8a53efe45d1
	I0728 15:11:06.770162   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.770252   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cdz7z","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9727653-ed51-43f3-95ad-fd2f5fb0ac6e","resourceVersion":"704","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"controller-revision-hash":"94985b49","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5747 chars]
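The "Waited for … due to client-side throttling" lines above come from client-go's own token-bucket rate limiter, not from server-side priority and fairness. A minimal sketch of that gating behavior, with illustrative QPS/burst values (the actual client settings are not shown in this log):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Token bucket: 5 requests/second with a burst of 10
	// (illustrative numbers, not the real client configuration).
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks until a token is available
		if wait := time.Since(start); wait > time.Millisecond {
			// client-go logs a message much like the ones above
			// when this wait crosses a threshold.
			fmt.Printf("waited %v before request %d\n", wait, i)
		}
	}
}

Once the bucket's burst is spent, each further call blocks for roughly 1/QPS, which is why the waits above cluster near 180-200ms.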
	I0728 15:11:06.963742   20887 request.go:533] Waited for 193.14489ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.963807   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.963816   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.963827   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.963840   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.967895   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:06.967910   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.967917   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.967944   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.967954   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.967961   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.967967   20887 round_trippers.go:580]     Audit-Id: 5a0598f5-f32a-42b3-b08f-2a44ad156d4f
	I0728 15:11:06.967973   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.968037   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.968331   20887 pod_ready.go:92] pod "kube-proxy-cdz7z" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:06.968338   20887 pod_ready.go:81] duration metric: took 380.511561ms waiting for pod "kube-proxy-cdz7z" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.968344   20887 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cn9x2" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:07.163824   20887 request.go:533] Waited for 195.350893ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-cn9x2
	I0728 15:11:07.163881   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-cn9x2
	I0728 15:11:07.163889   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:07.163901   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:07.163912   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:07.168887   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:07.168898   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:07.168904   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:07.168914   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:07.168920   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:07 GMT
	I0728 15:11:07.168924   20887 round_trippers.go:580]     Audit-Id: 9c91f87b-70d9-45f6-8683-984c661379d0
	I0728 15:11:07.168929   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:07.168933   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:07.170114   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cn9x2","generateName":"kube-proxy-","namespace":"kube-system","uid":"813dc8a0-2ea3-4ee9-83ce-fe09ccf38295","resourceVersion":"671","creationTimestamp":"2022-07-28T22:08:39Z","labels":{"controller-revision-hash":"94985b49","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:08:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5755 chars]
	I0728 15:11:07.364100   20887 request.go:533] Waited for 193.661665ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923-m03
	I0728 15:11:07.364152   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923-m03
	I0728 15:11:07.364160   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:07.364257   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:07.364271   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:07.368521   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:07.368536   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:07.368544   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:07 GMT
	I0728 15:11:07.368550   20887 round_trippers.go:580]     Audit-Id: 76694b8b-8a83-4975-82a5-3519e8d5a51f
	I0728 15:11:07.368561   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:07.368584   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:07.368595   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:07.368601   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:07.368867   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923-m03","uid":"705fe4c5-d194-48b6-83d4-926ad5fead86","resourceVersion":"686","creationTimestamp":"2022-07-28T22:09:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:09:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:09:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 4340 chars]
	I0728 15:11:07.369120   20887 pod_ready.go:92] pod "kube-proxy-cn9x2" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:07.369126   20887 pod_ready.go:81] duration metric: took 400.782058ms waiting for pod "kube-proxy-cn9x2" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:07.369132   20887 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:07.565324   20887 request.go:533] Waited for 196.139418ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220728150610-12923
	I0728 15:11:07.565387   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220728150610-12923
	I0728 15:11:07.565401   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:07.565415   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:07.565426   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:07.569519   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:07.569533   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:07.569540   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:07.569547   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:07.569554   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:07 GMT
	I0728 15:11:07.569560   20887 round_trippers.go:580]     Audit-Id: ed5b44d1-23a1-464f-b4e2-89d12aa4333d
	I0728 15:11:07.569567   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:07.569578   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:07.569646   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220728150610-12923","namespace":"kube-system","uid":"ef5d84ce-4249-4af0-b1be-7a3d7f8c2205","resourceVersion":"742","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"164dd1e1cbdc7905e66f2af11f156d06","kubernetes.io/config.mirror":"164dd1e1cbdc7905e66f2af11f156d06","kubernetes.io/config.seen":"2022-07-28T22:06:37.255019449Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernete
s.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i [truncated 4974 chars]
	I0728 15:11:07.765682   20887 request.go:533] Waited for 195.39648ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:07.765713   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:07.765717   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:07.765723   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:07.765728   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:07.768657   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:07.768668   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:07.768674   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:07 GMT
	I0728 15:11:07.768678   20887 round_trippers.go:580]     Audit-Id: b9f785af-8ceb-49b1-96e3-e5b38fb92ac1
	I0728 15:11:07.768684   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:07.768688   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:07.768693   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:07.768698   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:07.768750   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:07.768943   20887 pod_ready.go:92] pod "kube-scheduler-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:07.768949   20887 pod_ready.go:81] duration metric: took 399.816045ms waiting for pod "kube-scheduler-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:07.768956   20887 pod_ready.go:38] duration metric: took 22.266341127s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
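The pod_ready loop above boils down to polling each pod's Ready condition until it reports True or the 6m0s budget runs out. A hedged client-go sketch of the same check (the 500ms poll interval and the kubeconfig loading are assumptions; the pod name, namespace, and timeout are taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True,
// matching the `has status "Ready":"True"` lines in the log.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Same shape as the log: up to 6m0s for one named pod in kube-system.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-multinode-20220728150610-12923", metav1.GetOptions{})
		if err != nil {
			return false, nil // retry on transient errors
		}
		return podReady(pod), nil
	})
	fmt.Println("ready:", err == nil)
}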
	I0728 15:11:07.768970   20887 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:11:07.769018   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:11:07.777756   20887 command_runner.go:130] > 1658
	I0728 15:11:07.778465   20887 api_server.go:71] duration metric: took 22.485150166s to wait for apiserver process to appear ...
	I0728 15:11:07.778478   20887 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:11:07.778485   20887 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0728 15:11:07.783449   20887 api_server.go:266] https://127.0.0.1:56607/healthz returned 200:
	ok
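The healthz probe above only cares about an HTTP 200 response whose body is "ok". A self-contained sketch of that probe; InsecureSkipVerify stands in for the cluster CA and client-certificate setup that the real check uses but this log does not show:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The real check authenticates against the cluster CA; skipping
		// verification here only keeps the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://127.0.0.1:56607/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}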
	I0728 15:11:07.783478   20887 round_trippers.go:463] GET https://127.0.0.1:56607/version
	I0728 15:11:07.783482   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:07.783489   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:07.783495   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:07.784562   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:07.784572   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:07.784577   20887 round_trippers.go:580]     Audit-Id: 429bd255-1804-4dd6-bd12-f35136aeb1c7
	I0728 15:11:07.784582   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:07.784587   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:07.784592   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:07.784596   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:07.784600   20887 round_trippers.go:580]     Content-Length: 263
	I0728 15:11:07.784607   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:07 GMT
	I0728 15:11:07.784656   20887 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "24",
	  "gitVersion": "v1.24.3",
	  "gitCommit": "aef86a93758dc3cb2c658dd9657ab4ad4afc21cb",
	  "gitTreeState": "clean",
	  "buildDate": "2022-07-13T14:23:26Z",
	  "goVersion": "go1.18.3",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0728 15:11:07.784682   20887 api_server.go:140] control plane version: v1.24.3
	I0728 15:11:07.784688   20887 api_server.go:130] duration metric: took 6.205662ms to wait for apiserver health ...
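The "control plane version" line is read straight out of the /version payload shown above; decoding it needs only the standard library. The struct below mirrors a subset of the JSON keys from that response body:

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors (part of) the /version response body logged above.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	payload := []byte(`{"major":"1","minor":"24","gitVersion":"v1.24.3","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(payload, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.24.3
}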
	I0728 15:11:07.784692   20887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:11:07.963798   20887 request.go:533] Waited for 179.065632ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:11:07.963873   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:11:07.963881   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:07.963892   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:07.963904   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:07.969192   20887 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0728 15:11:07.969203   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:07.969211   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:07.969218   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:07.969223   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:07.969228   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:07.969234   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:07 GMT
	I0728 15:11:07.969242   20887 round_trippers.go:580]     Audit-Id: 24815efd-3433-4082-8bfe-d6b5780c1657
	I0728 15:11:07.970066   20887 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"801"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"797","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},
"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".": [truncated 84989 chars]
	I0728 15:11:07.971935   20887 system_pods.go:59] 12 kube-system pods found
	I0728 15:11:07.971945   20887 system_pods.go:61] "coredns-6d4b75cb6d-dfxk7" [ea8a6018-c281-45ec-bbb7-19f2988aa884] Running
	I0728 15:11:07.971950   20887 system_pods.go:61] "etcd-multinode-20220728150610-12923" [2d4683e7-2e93-41c5-af51-5181a7c29edd] Running
	I0728 15:11:07.971953   20887 system_pods.go:61] "kindnet-52mvf" [ef5b2400-09e0-4d0c-98b9-d520fd42e827] Running
	I0728 15:11:07.971958   20887 system_pods.go:61] "kindnet-tlp2m" [b535556f-fbbe-4220-9037-5016f1b8fb51] Running
	I0728 15:11:07.971961   20887 system_pods.go:61] "kindnet-v5hq8" [3410f1c1-9947-4a08-8503-660caf65dc5c] Running
	I0728 15:11:07.971965   20887 system_pods.go:61] "kube-apiserver-multinode-20220728150610-12923" [34425f5f-5cbc-4e7c-89b3-e4758c44f162] Running
	I0728 15:11:07.971969   20887 system_pods.go:61] "kube-controller-manager-multinode-20220728150610-12923" [92841ab9-f773-435e-a133-794e0d8e0cef] Running
	I0728 15:11:07.971973   20887 system_pods.go:61] "kube-proxy-bxdk6" [befca8fa-aef6-415a-b033-8522067db320] Running
	I0728 15:11:07.971977   20887 system_pods.go:61] "kube-proxy-cdz7z" [f9727653-ed51-43f3-95ad-fd2f5fb0ac6e] Running
	I0728 15:11:07.971981   20887 system_pods.go:61] "kube-proxy-cn9x2" [813dc8a0-2ea3-4ee9-83ce-fe09ccf38295] Running
	I0728 15:11:07.971985   20887 system_pods.go:61] "kube-scheduler-multinode-20220728150610-12923" [ef5d84ce-4249-4af0-b1be-7a3d7f8c2205] Running
	I0728 15:11:07.971990   20887 system_pods.go:61] "storage-provisioner" [29238934-2c0b-4262-80ff-12975d44a715] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 15:11:07.971993   20887 system_pods.go:74] duration metric: took 187.299378ms to wait for pod list to return data ...
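The "12 kube-system pods found" inventory above is a single list call against the namespace. A short sketch, reusing the client and imports from the readiness sketch earlier:

// Single list call behind the pod inventory; client setup as in the
// readiness sketch above.
pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
if err != nil {
	panic(err)
}
fmt.Printf("%d kube-system pods found\n", len(pods.Items))
for _, p := range pods.Items {
	fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
}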
	I0728 15:11:07.971997   20887 default_sa.go:34] waiting for default service account to be created ...
	I0728 15:11:08.164659   20887 request.go:533] Waited for 192.538868ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/default/serviceaccounts
	I0728 15:11:08.164703   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/default/serviceaccounts
	I0728 15:11:08.164711   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:08.164722   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:08.164733   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:08.168638   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:08.168656   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:08.168664   20887 round_trippers.go:580]     Audit-Id: 56225654-3f8d-4ee0-a172-4263a275cd06
	I0728 15:11:08.168671   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:08.168679   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:08.168686   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:08.168693   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:08.168698   20887 round_trippers.go:580]     Content-Length: 261
	I0728 15:11:08.168726   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:08 GMT
	I0728 15:11:08.168747   20887 request.go:1073] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"801"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d9af6ce9-3a6c-49bf-9e9c-59cda36b759c","resourceVersion":"306","creationTimestamp":"2022-07-28T22:06:49Z"}}]}
	I0728 15:11:08.168903   20887 default_sa.go:45] found service account: "default"
	I0728 15:11:08.168912   20887 default_sa.go:55] duration metric: took 196.912334ms for default service account to be created ...
	I0728 15:11:08.168918   20887 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 15:11:08.365709   20887 request.go:533] Waited for 196.734109ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:11:08.365797   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:11:08.365805   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:08.365845   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:08.365867   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:08.372156   20887 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0728 15:11:08.372176   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:08.372206   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:08 GMT
	I0728 15:11:08.372217   20887 round_trippers.go:580]     Audit-Id: 339079bb-6191-48db-8e1f-28f54811a523
	I0728 15:11:08.372235   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:08.372252   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:08.372268   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:08.372282   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:08.373463   20887 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"801"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"797","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},
"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".": [truncated 84989 chars]
	I0728 15:11:08.375305   20887 system_pods.go:86] 12 kube-system pods found
	I0728 15:11:08.375316   20887 system_pods.go:89] "coredns-6d4b75cb6d-dfxk7" [ea8a6018-c281-45ec-bbb7-19f2988aa884] Running
	I0728 15:11:08.375320   20887 system_pods.go:89] "etcd-multinode-20220728150610-12923" [2d4683e7-2e93-41c5-af51-5181a7c29edd] Running
	I0728 15:11:08.375325   20887 system_pods.go:89] "kindnet-52mvf" [ef5b2400-09e0-4d0c-98b9-d520fd42e827] Running
	I0728 15:11:08.375329   20887 system_pods.go:89] "kindnet-tlp2m" [b535556f-fbbe-4220-9037-5016f1b8fb51] Running
	I0728 15:11:08.375332   20887 system_pods.go:89] "kindnet-v5hq8" [3410f1c1-9947-4a08-8503-660caf65dc5c] Running
	I0728 15:11:08.375336   20887 system_pods.go:89] "kube-apiserver-multinode-20220728150610-12923" [34425f5f-5cbc-4e7c-89b3-e4758c44f162] Running
	I0728 15:11:08.375340   20887 system_pods.go:89] "kube-controller-manager-multinode-20220728150610-12923" [92841ab9-f773-435e-a133-794e0d8e0cef] Running
	I0728 15:11:08.375344   20887 system_pods.go:89] "kube-proxy-bxdk6" [befca8fa-aef6-415a-b033-8522067db320] Running
	I0728 15:11:08.375347   20887 system_pods.go:89] "kube-proxy-cdz7z" [f9727653-ed51-43f3-95ad-fd2f5fb0ac6e] Running
	I0728 15:11:08.375350   20887 system_pods.go:89] "kube-proxy-cn9x2" [813dc8a0-2ea3-4ee9-83ce-fe09ccf38295] Running
	I0728 15:11:08.375354   20887 system_pods.go:89] "kube-scheduler-multinode-20220728150610-12923" [ef5d84ce-4249-4af0-b1be-7a3d7f8c2205] Running
	I0728 15:11:08.375359   20887 system_pods.go:89] "storage-provisioner" [29238934-2c0b-4262-80ff-12975d44a715] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 15:11:08.375363   20887 system_pods.go:126] duration metric: took 206.443876ms to wait for k8s-apps to be running ...
	I0728 15:11:08.375368   20887 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 15:11:08.375418   20887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:11:08.384763   20887 system_svc.go:56] duration metric: took 9.390628ms WaitForService to wait for kubelet.
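The kubelet check above is an exit-code test: systemctl is-active --quiet prints nothing and exits 0 when the unit is active. The sketch below runs the same command locally for illustration; in the log it is executed via the ssh_runner inside the node container:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the command in the log; exit code 0 means the unit is active.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	err := cmd.Run()
	fmt.Println("kubelet active:", err == nil)
}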
	I0728 15:11:08.384775   20887 kubeadm.go:572] duration metric: took 23.091466866s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0728 15:11:08.384792   20887 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:11:08.563801   20887 request.go:533] Waited for 178.886144ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/nodes
	I0728 15:11:08.563847   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes
	I0728 15:11:08.563855   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:08.563868   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:08.563877   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:08.567623   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:08.567636   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:08.567644   20887 round_trippers.go:580]     Audit-Id: 8bcc8759-8301-44ce-9f23-dea79764f4d7
	I0728 15:11:08.567653   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:08.567658   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:08.567663   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:08.567667   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:08.567671   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:08 GMT
	I0728 15:11:08.567883   20887 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"801"},"items":[{"metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-m
anaged-attach-detach":"true"},"managedFields":[{"manager":"kubelet","op [truncated 16208 chars]
	I0728 15:11:08.568289   20887 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:11:08.568297   20887 node_conditions.go:123] node cpu capacity is 6
	I0728 15:11:08.568305   20887 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:11:08.568308   20887 node_conditions.go:123] node cpu capacity is 6
	I0728 15:11:08.568311   20887 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:11:08.568315   20887 node_conditions.go:123] node cpu capacity is 6
	I0728 15:11:08.568318   20887 node_conditions.go:105] duration metric: took 183.523754ms to run NodePressure ...
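The NodePressure pass reads each node's reported capacity; the 61255492Ki and 6-cpu figures above come straight from Node.Status.Capacity on the three nodes. Again reusing the client and imports from the readiness sketch:

// One list call covers all nodes; Capacity is a resource-name -> quantity map.
nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
if err != nil {
	panic(err)
}
for _, n := range nodes.Items {
	fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name,
		n.Status.Capacity.StorageEphemeral().String(),
		n.Status.Capacity.Cpu().String())
}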
	I0728 15:11:08.568326   20887 start.go:216] waiting for startup goroutines ...
	I0728 15:11:08.568988   20887 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:11:08.569053   20887 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/config.json ...
	I0728 15:11:08.612976   20887 out.go:177] * Starting worker node multinode-20220728150610-12923-m02 in cluster multinode-20220728150610-12923
	I0728 15:11:08.634912   20887 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:11:08.656819   20887 out.go:177] * Pulling base image ...
	I0728 15:11:08.677880   20887 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:11:08.677897   20887 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:11:08.677903   20887 cache.go:57] Caching tarball of preloaded images
	I0728 15:11:08.677993   20887 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:11:08.678003   20887 cache.go:60] Finished verifying existence of preloaded tar for v1.24.3 on docker
	I0728 15:11:08.678387   20887 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/config.json ...
	I0728 15:11:08.742068   20887 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:11:08.742081   20887 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:11:08.742090   20887 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:11:08.742151   20887 start.go:370] acquiring machines lock for multinode-20220728150610-12923-m02: {Name:mkeb9492df24fdad2e36a2cb175959a1c4df7525 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:11:08.742219   20887 start.go:374] acquired machines lock for "multinode-20220728150610-12923-m02" in 56.341µs
	I0728 15:11:08.742235   20887 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:11:08.742240   20887 fix.go:55] fixHost starting: m02
	I0728 15:11:08.742457   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923-m02 --format={{.State.Status}}
	I0728 15:11:08.805899   20887 fix.go:103] recreateIfNeeded on multinode-20220728150610-12923-m02: state=Stopped err=<nil>
	W0728 15:11:08.805923   20887 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:11:08.849467   20887 out.go:177] * Restarting existing docker container for "multinode-20220728150610-12923-m02" ...
	I0728 15:11:08.870781   20887 cli_runner.go:164] Run: docker start multinode-20220728150610-12923-m02
	I0728 15:11:09.219421   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923-m02 --format={{.State.Status}}
	I0728 15:11:09.285547   20887 kic.go:415] container "multinode-20220728150610-12923-m02" state is running.
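State checks like the one above shell out to docker container inspect with a Go template. A minimal equivalent, with the container name taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same inspect call as in the log: prints only the container state.
	out, err := exec.Command("docker", "container", "inspect",
		"multinode-20220728150610-12923-m02",
		"--format={{.State.Status}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("state:", strings.TrimSpace(string(out))) // e.g. "running"
}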
	I0728 15:11:09.286384   20887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923-m02
	I0728 15:11:09.354382   20887 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/config.json ...
	I0728 15:11:09.354781   20887 machine.go:88] provisioning docker machine ...
	I0728 15:11:09.354797   20887 ubuntu.go:169] provisioning hostname "multinode-20220728150610-12923-m02"
	I0728 15:11:09.354857   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:09.482595   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:11:09.482758   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56641 <nil> <nil>}
	I0728 15:11:09.482771   20887 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220728150610-12923-m02 && echo "multinode-20220728150610-12923-m02" | sudo tee /etc/hostname
	I0728 15:11:09.609831   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220728150610-12923-m02
	
	I0728 15:11:09.609997   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:09.675696   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:11:09.676000   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56641 <nil> <nil>}
	I0728 15:11:09.676035   20887 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220728150610-12923-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220728150610-12923-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220728150610-12923-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:11:09.795111   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:11:09.795132   20887 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:11:09.795150   20887 ubuntu.go:177] setting up certificates
	I0728 15:11:09.795159   20887 provision.go:83] configureAuth start
	I0728 15:11:09.795237   20887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923-m02
	I0728 15:11:09.861999   20887 provision.go:138] copyHostCerts
	I0728 15:11:09.862061   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:11:09.862129   20887 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:11:09.862138   20887 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:11:09.862232   20887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:11:09.862391   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:11:09.862429   20887 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:11:09.862434   20887 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:11:09.862496   20887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:11:09.862612   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:11:09.862637   20887 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:11:09.862642   20887 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:11:09.862704   20887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:11:09.862823   20887 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.multinode-20220728150610-12923-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220728150610-12923-m02]
	I0728 15:11:09.936967   20887 provision.go:172] copyRemoteCerts
	I0728 15:11:09.937021   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:11:09.937074   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:10.001888   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56641 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923-m02/id_rsa Username:docker}
	I0728 15:11:10.090024   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 15:11:10.090103   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0728 15:11:10.123659   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 15:11:10.123756   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 15:11:10.142514   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 15:11:10.142586   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:11:10.167080   20887 provision.go:86] duration metric: configureAuth took 371.914455ms
	I0728 15:11:10.167094   20887 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:11:10.167288   20887 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:11:10.167365   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:10.231636   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:11:10.231792   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56641 <nil> <nil>}
	I0728 15:11:10.231801   20887 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:11:10.350238   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:11:10.350250   20887 ubuntu.go:71] root file system type: overlay
	I0728 15:11:10.350365   20887 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:11:10.350874   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:10.417112   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:11:10.417255   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56641 <nil> <nil>}
	I0728 15:11:10.417304   20887 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:11:10.548474   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:11:10.548561   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:10.612360   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:11:10.612516   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56641 <nil> <nil>}
	I0728 15:11:10.612530   20887 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:11:10.734617   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: 
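
The diff || { mv; daemon-reload; restart; } command above is the idempotent update step for the rendered docker.service: the new file only replaces the installed unit, and Docker is only restarted, when the contents actually differ. A minimal standalone Go sketch of the same idiom (updateDockerUnit is an illustrative name, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// updateDockerUnit mirrors the idempotent update idiom from the log above:
// the freshly rendered unit only replaces the one on disk, and Docker is
// only restarted, when `diff` reports a difference.
func updateDockerUnit() error {
	script := "sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || " +
		"{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; " +
		"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("updating docker.service: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := updateDockerUnit(); err != nil {
		fmt.Println(err)
	}
}

Because diff exits 0 on identical files, repeated provisioning passes leave an already-correct, running Docker daemon untouched, which is consistent with the empty SSH output logged here.
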
	I0728 15:11:10.734633   20887 machine.go:91] provisioned docker machine in 1.379858272s
	I0728 15:11:10.734640   20887 start.go:307] post-start starting for "multinode-20220728150610-12923-m02" (driver="docker")
	I0728 15:11:10.734644   20887 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:11:10.734732   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:11:10.734782   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:10.800798   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56641 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923-m02/id_rsa Username:docker}
	I0728 15:11:10.888928   20887 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:11:10.892245   20887 command_runner.go:130] > NAME="Ubuntu"
	I0728 15:11:10.892261   20887 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0728 15:11:10.892268   20887 command_runner.go:130] > ID=ubuntu
	I0728 15:11:10.892278   20887 command_runner.go:130] > ID_LIKE=debian
	I0728 15:11:10.892285   20887 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0728 15:11:10.892291   20887 command_runner.go:130] > VERSION_ID="20.04"
	I0728 15:11:10.892296   20887 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0728 15:11:10.892303   20887 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0728 15:11:10.892308   20887 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0728 15:11:10.892316   20887 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0728 15:11:10.892324   20887 command_runner.go:130] > VERSION_CODENAME=focal
	I0728 15:11:10.892329   20887 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0728 15:11:10.892427   20887 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:11:10.892445   20887 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:11:10.892452   20887 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:11:10.892458   20887 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:11:10.892464   20887 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:11:10.892574   20887 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:11:10.892704   20887 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:11:10.892712   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> /etc/ssl/certs/129232.pem
	I0728 15:11:10.892846   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:11:10.900165   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:11:10.916534   20887 start.go:310] post-start completed in 181.884315ms
	I0728 15:11:10.916602   20887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:11:10.916655   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:10.980902   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56641 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923-m02/id_rsa Username:docker}
	I0728 15:11:11.065850   20887 command_runner.go:130] > 12%
	I0728 15:11:11.065904   20887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:11:11.070247   20887 command_runner.go:130] > 49G
	I0728 15:11:11.070577   20887 fix.go:57] fixHost completed within 2.328355462s
	I0728 15:11:11.070597   20887 start.go:82] releasing machines lock for "multinode-20220728150610-12923-m02", held for 2.328384474s
	I0728 15:11:11.070676   20887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923-m02
	I0728 15:11:11.155177   20887 out.go:177] * Found network options:
	I0728 15:11:11.176926   20887 out.go:177]   - NO_PROXY=192.168.58.2
	W0728 15:11:11.197862   20887 proxy.go:118] fail to check proxy env: Error ip not in block
	W0728 15:11:11.197906   20887 proxy.go:118] fail to check proxy env: Error ip not in block
	I0728 15:11:11.198135   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 15:11:11.198146   20887 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:11:11.198191   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:11.198213   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:11.266458   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56641 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923-m02/id_rsa Username:docker}
	I0728 15:11:11.266597   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56641 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923-m02/id_rsa Username:docker}
	I0728 15:11:11.354149   20887 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0728 15:11:11.369112   20887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:11:11.536575   20887 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0728 15:11:11.536586   20887 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0728 15:11:11.536591   20887 command_runner.go:130] > <H1>302 Moved</H1>
	I0728 15:11:11.536594   20887 command_runner.go:130] > The document has moved
	I0728 15:11:11.536598   20887 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0728 15:11:11.536601   20887 command_runner.go:130] > </BODY></HTML>
	I0728 15:11:11.537900   20887 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0728 15:11:11.632120   20887 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:11:11.642068   20887 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0728 15:11:11.642078   20887 command_runner.go:130] > [Unit]
	I0728 15:11:11.642083   20887 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 15:11:11.642087   20887 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 15:11:11.642090   20887 command_runner.go:130] > BindsTo=containerd.service
	I0728 15:11:11.642095   20887 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0728 15:11:11.642099   20887 command_runner.go:130] > Wants=network-online.target
	I0728 15:11:11.642105   20887 command_runner.go:130] > Requires=docker.socket
	I0728 15:11:11.642109   20887 command_runner.go:130] > StartLimitBurst=3
	I0728 15:11:11.642112   20887 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 15:11:11.642115   20887 command_runner.go:130] > [Service]
	I0728 15:11:11.642119   20887 command_runner.go:130] > Type=notify
	I0728 15:11:11.642123   20887 command_runner.go:130] > Restart=on-failure
	I0728 15:11:11.642132   20887 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0728 15:11:11.642140   20887 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 15:11:11.642146   20887 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 15:11:11.642152   20887 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 15:11:11.642158   20887 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 15:11:11.642163   20887 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 15:11:11.642170   20887 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 15:11:11.642176   20887 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 15:11:11.642187   20887 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 15:11:11.642193   20887 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 15:11:11.642197   20887 command_runner.go:130] > ExecStart=
	I0728 15:11:11.642209   20887 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0728 15:11:11.642217   20887 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 15:11:11.642222   20887 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 15:11:11.642228   20887 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 15:11:11.642231   20887 command_runner.go:130] > LimitNOFILE=infinity
	I0728 15:11:11.642234   20887 command_runner.go:130] > LimitNPROC=infinity
	I0728 15:11:11.642238   20887 command_runner.go:130] > LimitCORE=infinity
	I0728 15:11:11.642243   20887 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 15:11:11.642248   20887 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 15:11:11.642251   20887 command_runner.go:130] > TasksMax=infinity
	I0728 15:11:11.642255   20887 command_runner.go:130] > TimeoutStartSec=0
	I0728 15:11:11.642260   20887 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 15:11:11.642264   20887 command_runner.go:130] > Delegate=yes
	I0728 15:11:11.642268   20887 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 15:11:11.642271   20887 command_runner.go:130] > KillMode=process
	I0728 15:11:11.642280   20887 command_runner.go:130] > [Install]
	I0728 15:11:11.642284   20887 command_runner.go:130] > WantedBy=multi-user.target
	I0728 15:11:11.642298   20887 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:11:11.642350   20887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:11:11.651473   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:11:11.664796   20887 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 15:11:11.664809   20887 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 15:11:11.665635   20887 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:11:11.746043   20887 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:11:11.816441   20887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:11:11.892689   20887 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:11:12.114905   20887 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 15:11:12.188963   20887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:11:12.267111   20887 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 15:11:12.276414   20887 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 15:11:12.276482   20887 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 15:11:12.280028   20887 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0728 15:11:12.280038   20887 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0728 15:11:12.280046   20887 command_runner.go:130] > Device: 10002fh/1048623d	Inode: 134         Links: 1
	I0728 15:11:12.280052   20887 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0728 15:11:12.280058   20887 command_runner.go:130] > Access: 2022-07-28 22:11:11.566651516 +0000
	I0728 15:11:12.280067   20887 command_runner.go:130] > Modify: 2022-07-28 22:11:11.566651516 +0000
	I0728 15:11:12.280074   20887 command_runner.go:130] > Change: 2022-07-28 22:11:11.575651516 +0000
	I0728 15:11:12.280079   20887 command_runner.go:130] >  Birth: -
	I0728 15:11:12.280215   20887 start.go:471] Will wait 60s for crictl version
	I0728 15:11:12.280257   20887 ssh_runner.go:195] Run: sudo crictl version
	I0728 15:11:12.306008   20887 command_runner.go:130] > Version:  0.1.0
	I0728 15:11:12.306019   20887 command_runner.go:130] > RuntimeName:  docker
	I0728 15:11:12.306022   20887 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0728 15:11:12.306026   20887 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0728 15:11:12.307991   20887 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0728 15:11:12.308065   20887 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:11:12.340484   20887 command_runner.go:130] > 20.10.17
	I0728 15:11:12.343408   20887 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:11:12.376555   20887 command_runner.go:130] > 20.10.17
	I0728 15:11:12.422851   20887 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 15:11:12.443932   20887 out.go:177]   - env NO_PROXY=192.168.58.2
	I0728 15:11:12.464964   20887 cli_runner.go:164] Run: docker exec -t multinode-20220728150610-12923-m02 dig +short host.docker.internal
	I0728 15:11:12.583851   20887 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:11:12.583938   20887 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:11:12.588188   20887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
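
The /etc/hosts rewrite above filters out any stale host.minikube.internal line, appends the new mapping, and copies the temp file back with cp rather than renaming it, most likely because /etc/hosts inside a Docker container is a bind mount that can only be written in place. A small Go sketch of the same idiom (setHostsEntry is an illustrative name, not a minikube function):

package main

import (
	"fmt"
	"os/exec"
)

// setHostsEntry rewrites one name in /etc/hosts the way the log does:
// drop the old tab-separated entry, append the new one, then copy the
// temp file back over /etc/hosts in place (a rename would fail on a
// bind-mounted file).
func setHostsEntry(ip, host string) error {
	script := fmt.Sprintf(
		"{ grep -v \"\t%s$\" /etc/hosts; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
		host, ip, host)
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("updating /etc/hosts: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := setHostsEntry("192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
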
	I0728 15:11:12.597159   20887 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923 for IP: 192.168.58.3
	I0728 15:11:12.597283   20887 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:11:12.597333   20887 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:11:12.597340   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0728 15:11:12.597360   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0728 15:11:12.597379   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0728 15:11:12.597400   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0728 15:11:12.597492   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:11:12.597529   20887 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:11:12.597541   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:11:12.597579   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:11:12.597611   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:11:12.597641   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:11:12.597707   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:11:12.597736   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem -> /usr/share/ca-certificates/12923.pem
	I0728 15:11:12.597753   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> /usr/share/ca-certificates/129232.pem
	I0728 15:11:12.597768   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:11:12.598120   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:11:12.615188   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:11:12.631915   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:11:12.649444   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:11:12.666887   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:11:12.684100   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:11:12.700512   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:11:12.717728   20887 ssh_runner.go:195] Run: openssl version
	I0728 15:11:12.723406   20887 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0728 15:11:12.723541   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:11:12.731289   20887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:11:12.735144   20887 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:11:12.735164   20887 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:11:12.735201   20887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:11:12.740385   20887 command_runner.go:130] > b5213941
	I0728 15:11:12.740715   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 15:11:12.748119   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:11:12.756348   20887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:11:12.760441   20887 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:11:12.760461   20887 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:11:12.760498   20887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:11:12.765511   20887 command_runner.go:130] > 51391683
	I0728 15:11:12.765888   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 15:11:12.773286   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:11:12.781444   20887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:11:12.785347   20887 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:11:12.785386   20887 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:11:12.785432   20887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:11:12.790384   20887 command_runner.go:130] > 3ec20f2e
	I0728 15:11:12.790661   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
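
The openssl x509 -hash / ln -fs pairs above follow OpenSSL's hashed-directory convention: certificate verification looks a CA up in /etc/ssl/certs by a file named <subject-hash>.0, so each installed PEM gets a symlink named after its subject hash (b5213941, 51391683, 3ec20f2e in this run). A sketch of that step in Go, assuming root privileges; linkByHash is an illustrative name:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a PEM certificate and
// creates the /etc/ssl/certs/<hash>.0 symlink that the verifier expects,
// replacing any stale link first (the `ln -fs` behavior in the log).
func linkByHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
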
	I0728 15:11:12.797946   20887 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:11:12.866746   20887 command_runner.go:130] > systemd
	I0728 15:11:12.869931   20887 cni.go:95] Creating CNI manager for ""
	I0728 15:11:12.869941   20887 cni.go:156] 3 nodes found, recommending kindnet
	I0728 15:11:12.869962   20887 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:11:12.869972   20887 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220728150610-12923 NodeName:multinode-20220728150610-12923-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:11:12.870069   20887 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220728150610-12923-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 15:11:12.870127   20887 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220728150610-12923-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728150610-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0728 15:11:12.870188   20887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 15:11:12.877349   20887 command_runner.go:130] > kubeadm
	I0728 15:11:12.877357   20887 command_runner.go:130] > kubectl
	I0728 15:11:12.877360   20887 command_runner.go:130] > kubelet
	I0728 15:11:12.878031   20887 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:11:12.878080   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0728 15:11:12.885505   20887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (496 bytes)
	I0728 15:11:12.898035   20887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:11:12.911674   20887 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:11:12.915275   20887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:11:12.924349   20887 host.go:66] Checking if "multinode-20220728150610-12923" exists ...
	I0728 15:11:12.924515   20887 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:11:12.924521   20887 start.go:285] JoinCluster: &{Name:multinode-20220728150610-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728150610-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:11:12.924592   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0728 15:11:12.924636   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:11:12.989637   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:11:13.122618   20887 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 
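
For reference, the --discovery-token-ca-cert-hash value in the join command printed above is a SHA-256 pin over the cluster CA certificate's DER-encoded SubjectPublicKeyInfo, which kubeadm prints with a "sha256:" prefix. A self-contained Go sketch of that computation, assuming the standard kubeadm pin format; the ca.crt path is taken from the log and caCertHash is an illustrative name:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash reproduces kubeadm's discovery pin: SHA-256 of the CA's
// raw SubjectPublicKeyInfo, hex-encoded with a "sha256:" prefix.
func caCertHash(caPath string) (string, error) {
	data, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h)
}
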
	I0728 15:11:13.122659   20887 start.go:298] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:11:13.122679   20887 host.go:66] Checking if "multinode-20220728150610-12923" exists ...
	I0728 15:11:13.122920   20887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl drain multinode-20220728150610-12923-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0728 15:11:13.122975   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:11:13.188038   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:11:13.309811   20887 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0728 15:11:13.335024   20887 command_runner.go:130] ! WARNING: ignoring DaemonSet-managed Pods: kube-system/kindnet-v5hq8, kube-system/kube-proxy-bxdk6
	I0728 15:11:16.344328   20887 command_runner.go:130] > node/multinode-20220728150610-12923-m02 cordoned
	I0728 15:11:16.344348   20887 command_runner.go:130] > pod "busybox-d46db594c-vg2w2" has DeletionTimestamp older than 1 seconds, skipping
	I0728 15:11:16.344353   20887 command_runner.go:130] > node/multinode-20220728150610-12923-m02 drained
	I0728 15:11:16.344367   20887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl drain multinode-20220728150610-12923-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.221464776s)
	I0728 15:11:16.344376   20887 node.go:109] successfully drained node "m02"
	I0728 15:11:16.344693   20887 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:11:16.344940   20887 kapi.go:59] client config for multinode-20220728150610-12923: &rest.Config{Host:"https://127.0.0.1:56607", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 15:11:16.345183   20887 request.go:1073] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0728 15:11:16.345210   20887 round_trippers.go:463] DELETE https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923-m02
	I0728 15:11:16.345214   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:16.345220   20887 round_trippers.go:473]     Content-Type: application/json
	I0728 15:11:16.345228   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:16.345233   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:16.348628   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:16.348643   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:16.348651   20887 round_trippers.go:580]     Audit-Id: 544b1357-f85a-4ec7-ab63-98f15a25bae7
	I0728 15:11:16.348658   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:16.348665   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:16.348672   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:16.348682   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:16.348688   20887 round_trippers.go:580]     Content-Length: 185
	I0728 15:11:16.348693   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:16 GMT
	I0728 15:11:16.348705   20887 request.go:1073] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-20220728150610-12923-m02","kind":"nodes","uid":"2d4cf78c-ed12-4ef8-9967-e85ae2ffd232"}}
	I0728 15:11:16.348725   20887 node.go:125] successfully deleted node "m02"
	I0728 15:11:16.348731   20887 start.go:302] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:11:16.348747   20887 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:11:16.348759   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02"
	I0728 15:11:16.408615   20887 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 15:11:16.524859   20887 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 15:11:16.524888   20887 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 15:11:16.543951   20887 command_runner.go:130] ! W0728 22:11:16.428638    1126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:11:16.543966   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0728 15:11:16.543988   20887 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0728 15:11:16.543998   20887 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0728 15:11:16.544005   20887 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0728 15:11:16.544012   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0728 15:11:16.544024   20887 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0728 15:11:16.544036   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0728 15:11:16.544075   20887 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:11:16.428638    1126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:16.544085   20887 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0728 15:11:16.544092   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
	I0728 15:11:16.577438   20887 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0728 15:11:16.577455   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:16.577478   20887 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:16.577513   20887 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:11:16.428638    1126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:27.624194   20887 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:11:27.624334   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02"
	I0728 15:11:27.658783   20887 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 15:11:27.762757   20887 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 15:11:27.762770   20887 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 15:11:27.780178   20887 command_runner.go:130] ! W0728 22:11:27.664907    1640 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:11:27.780192   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0728 15:11:27.780207   20887 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0728 15:11:27.780212   20887 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0728 15:11:27.780216   20887 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0728 15:11:27.780222   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0728 15:11:27.780232   20887 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0728 15:11:27.780240   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0728 15:11:27.780266   20887 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:11:27.664907    1640 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:27.780278   20887 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0728 15:11:27.780287   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
	I0728 15:11:27.816375   20887 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0728 15:11:27.816391   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:27.816407   20887 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:27.816419   20887 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:11:27.664907    1640 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:49.424452   20887 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:11:49.424543   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02"
	I0728 15:11:49.458511   20887 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 15:11:49.569686   20887 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 15:11:49.569700   20887 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 15:11:49.586634   20887 command_runner.go:130] ! W0728 22:11:49.458810    1871 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:11:49.586649   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0728 15:11:49.586657   20887 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0728 15:11:49.586667   20887 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0728 15:11:49.586677   20887 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0728 15:11:49.586683   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0728 15:11:49.586692   20887 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0728 15:11:49.586699   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0728 15:11:49.586729   20887 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:11:49.458810    1871 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:49.586737   20887 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0728 15:11:49.586749   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
	I0728 15:11:49.620524   20887 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0728 15:11:49.620538   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:49.620553   20887 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:49.620563   20887 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:11:49.458810    1871 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:15.825070   20887 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:12:15.834433   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02"
	I0728 15:12:15.867913   20887 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 15:12:15.973713   20887 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 15:12:15.973727   20887 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 15:12:15.991632   20887 command_runner.go:130] ! W0728 22:12:15.887398    2126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:12:15.991645   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0728 15:12:15.991653   20887 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0728 15:12:15.991658   20887 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0728 15:12:15.991662   20887 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0728 15:12:15.991668   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0728 15:12:15.991677   20887 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0728 15:12:15.991682   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0728 15:12:15.991710   20887 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:12:15.887398    2126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:15.991717   20887 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0728 15:12:15.991725   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
	I0728 15:12:16.026740   20887 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0728 15:12:16.026757   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:16.026777   20887 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:16.026789   20887 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:12:15.887398    2126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:47.674673   20887 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:12:47.674715   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02"
	I0728 15:12:47.708880   20887 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 15:12:47.810500   20887 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 15:12:47.810513   20887 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 15:12:47.829278   20887 command_runner.go:130] ! W0728 22:12:47.729965    2439 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:12:47.829292   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0728 15:12:47.829301   20887 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0728 15:12:47.829305   20887 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0728 15:12:47.829311   20887 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0728 15:12:47.829318   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0728 15:12:47.829327   20887 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0728 15:12:47.829333   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0728 15:12:47.829358   20887 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:12:47.729965    2439 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:47.829367   20887 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0728 15:12:47.829374   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
	I0728 15:12:47.864820   20887 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0728 15:12:47.864837   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:47.864860   20887 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:47.864873   20887 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:12:47.729965    2439 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:13:34.676416   20887 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:13:34.676475   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02"
	I0728 15:13:34.711703   20887 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 15:13:34.817378   20887 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 15:13:34.817398   20887 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 15:13:34.834797   20887 command_runner.go:130] ! W0728 22:13:34.723396    2846 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:13:34.834811   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0728 15:13:34.834822   20887 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0728 15:13:34.834827   20887 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0728 15:13:34.834832   20887 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0728 15:13:34.834837   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0728 15:13:34.834848   20887 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0728 15:13:34.834857   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0728 15:13:34.834898   20887 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:13:34.723396    2846 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:13:34.834908   20887 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0728 15:13:34.834917   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
	I0728 15:13:34.870155   20887 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0728 15:13:34.870174   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0728 15:13:34.870194   20887 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:13:34.870211   20887 start.go:287] JoinCluster complete in 2m21.947191876s
	I0728 15:13:34.892247   20887 out.go:177] 
	W0728 15:13:34.914357   20887 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:13:34.723396    2846 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 15:13:34.914387   20887 out.go:239] * 
	W0728 15:13:34.915469   20887 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 15:13:34.978957   20887 out.go:177] 

                                                
                                                
** /stderr **
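Editor's note: the retry loop above never converges because the two halves of the recovery path block each other. Every `kubeadm join` dies in the kubelet-start phase because a Node named "multinode-20220728150610-12923-m02" is still registered as Ready, and every intermediate `kubeadm reset --force` dies because the guest exposes two CRI endpoints (containerd and cri-dockerd) with no `--cri-socket` given, so the stale node state is never cleared; minikube's retry.go backs off with growing delays (11.0s, 21.6s, 26.2s, 31.6s, 46.8s) and gives up after 2m21s with exit status 80. A hedged manual-recovery sketch, not part of this report (node and profile names are taken from the log; `--cri-socket` is the standard kubeadm flag for disambiguating runtimes):

	# on the control plane: drop the stale Node object that blocks the rejoin
	kubectl delete node multinode-20220728150610-12923-m02
	# on the worker: reset with the CRI endpoint spelled out so kubeadm
	# does not abort on "Found multiple CRI endpoints on the host"
	sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock
	# then re-run the kubeadm join command exactly as logged above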
multinode_test.go:295: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-20220728150610-12923" : exit status 80
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220728150610-12923
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-20220728150610-12923
helpers_test.go:235: (dbg) docker inspect multinode-20220728150610-12923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b39b419ecf77010a834dffdbc20b9a3e6bf6aae73c4ee639189b03527faa49c7",
	        "Created": "2022-07-28T22:06:17.94053837Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 91090,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:10:17.046008857Z",
	            "FinishedAt": "2022-07-28T22:09:51.179675917Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/b39b419ecf77010a834dffdbc20b9a3e6bf6aae73c4ee639189b03527faa49c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b39b419ecf77010a834dffdbc20b9a3e6bf6aae73c4ee639189b03527faa49c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/b39b419ecf77010a834dffdbc20b9a3e6bf6aae73c4ee639189b03527faa49c7/hosts",
	        "LogPath": "/var/lib/docker/containers/b39b419ecf77010a834dffdbc20b9a3e6bf6aae73c4ee639189b03527faa49c7/b39b419ecf77010a834dffdbc20b9a3e6bf6aae73c4ee639189b03527faa49c7-json.log",
	        "Name": "/multinode-20220728150610-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-20220728150610-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-20220728150610-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a52565b39a2b9fb38cf1d04fe27997da874636df7a5a3c27e04090d63d1c1718-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a52565b39a2b9fb38cf1d04fe27997da874636df7a5a3c27e04090d63d1c1718/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a52565b39a2b9fb38cf1d04fe27997da874636df7a5a3c27e04090d63d1c1718/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a52565b39a2b9fb38cf1d04fe27997da874636df7a5a3c27e04090d63d1c1718/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-20220728150610-12923",
	                "Source": "/var/lib/docker/volumes/multinode-20220728150610-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-20220728150610-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-20220728150610-12923",
	                "name.minikube.sigs.k8s.io": "multinode-20220728150610-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "07d9c606eaf558ccb992a99d1d8b8aaeb58a44dbdc5cc2da347f6215843387fa",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56608"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56609"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56610"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56611"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56607"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/07d9c606eaf5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-20220728150610-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b39b419ecf77",
	                        "multinode-20220728150610-12923"
	                    ],
	                    "NetworkID": "efd468510dcf83060bcd82025d61e9592003327a9c147f42b8af1f947e86fd70",
	                    "EndpointID": "46827855f8abdb3246fa7080c33f557f07030cef0610b5063915686fa1b80ccf",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
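The Ports map in the inspect dump above is how minikube resolves each container service port (22, 2376, 5000, 8443, 32443) to the ephemeral host port Docker assigned. The same Go templates that recur throughout these logs can pull a single value back out; a minimal sketch against the profile from this run:

    # Host port mapped to the container's SSH port (22/tcp); prints 56608 for this run.
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      multinode-20220728150610-12923

    # Container IPs on the attached networks (192.168.58.2 here, no IPv6).
    docker container inspect \
      -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' \
      multinode-20220728150610-12923

Both templates appear verbatim in the cli_runner invocations recorded below.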
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-20220728150610-12923 -n multinode-20220728150610-12923
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220728150610-12923 logs -n 25: (3.395298971s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|---------|---------|---------------------|---------------------|
	| Command |                                                                   Args                                                                   |            Profile             |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-20220728150610-12923                                                                                                           | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:08 PDT | 28 Jul 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220728150610-12923-m02                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| cp      | multinode-20220728150610-12923 cp multinode-20220728150610-12923-m02:/home/docker/cp-test.txt                                            | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:08 PDT | 28 Jul 22 15:08 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile581616719/001/cp-test_multinode-20220728150610-12923-m02.txt |                                |         |         |                     |                     |
	| ssh     | multinode-20220728150610-12923                                                                                                           | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:08 PDT | 28 Jul 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220728150610-12923-m02                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| cp      | multinode-20220728150610-12923 cp multinode-20220728150610-12923-m02:/home/docker/cp-test.txt                                            | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:08 PDT | 28 Jul 22 15:08 PDT |
	|         | multinode-20220728150610-12923:/home/docker/cp-test_multinode-20220728150610-12923-m02_multinode-20220728150610-12923.txt                |                                |         |         |                     |                     |
	| ssh     | multinode-20220728150610-12923                                                                                                           | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:08 PDT | 28 Jul 22 15:08 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220728150610-12923-m02                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| ssh     | multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923 sudo cat                                                            | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:08 PDT | 28 Jul 22 15:08 PDT |
	|         | /home/docker/cp-test_multinode-20220728150610-12923-m02_multinode-20220728150610-12923.txt                                               |                                |         |         |                     |                     |
	| cp      | multinode-20220728150610-12923 cp multinode-20220728150610-12923-m02:/home/docker/cp-test.txt                                            | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:08 PDT | 28 Jul 22 15:08 PDT |
	|         | multinode-20220728150610-12923-m03:/home/docker/cp-test_multinode-20220728150610-12923-m02_multinode-20220728150610-12923-m03.txt        |                                |         |         |                     |                     |
	| ssh     | multinode-20220728150610-12923                                                                                                           | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220728150610-12923-m02                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| ssh     | multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m03 sudo cat                                                        | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | /home/docker/cp-test_multinode-20220728150610-12923-m02_multinode-20220728150610-12923-m03.txt                                           |                                |         |         |                     |                     |
	| cp      | multinode-20220728150610-12923 cp testdata/cp-test.txt                                                                                   | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | multinode-20220728150610-12923-m03:/home/docker/cp-test.txt                                                                              |                                |         |         |                     |                     |
	| ssh     | multinode-20220728150610-12923                                                                                                           | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220728150610-12923-m03                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| cp      | multinode-20220728150610-12923 cp multinode-20220728150610-12923-m03:/home/docker/cp-test.txt                                            | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile581616719/001/cp-test_multinode-20220728150610-12923-m03.txt |                                |         |         |                     |                     |
	| ssh     | multinode-20220728150610-12923                                                                                                           | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220728150610-12923-m03                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| cp      | multinode-20220728150610-12923 cp multinode-20220728150610-12923-m03:/home/docker/cp-test.txt                                            | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | multinode-20220728150610-12923:/home/docker/cp-test_multinode-20220728150610-12923-m03_multinode-20220728150610-12923.txt                |                                |         |         |                     |                     |
	| ssh     | multinode-20220728150610-12923                                                                                                           | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220728150610-12923-m03                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| ssh     | multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923 sudo cat                                                            | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | /home/docker/cp-test_multinode-20220728150610-12923-m03_multinode-20220728150610-12923.txt                                               |                                |         |         |                     |                     |
	| cp      | multinode-20220728150610-12923 cp multinode-20220728150610-12923-m03:/home/docker/cp-test.txt                                            | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | multinode-20220728150610-12923-m02:/home/docker/cp-test_multinode-20220728150610-12923-m03_multinode-20220728150610-12923-m02.txt        |                                |         |         |                     |                     |
	| ssh     | multinode-20220728150610-12923                                                                                                           | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | ssh -n                                                                                                                                   |                                |         |         |                     |                     |
	|         | multinode-20220728150610-12923-m03                                                                                                       |                                |         |         |                     |                     |
	|         | sudo cat /home/docker/cp-test.txt                                                                                                        |                                |         |         |                     |                     |
	| ssh     | multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m02 sudo cat                                                        | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | /home/docker/cp-test_multinode-20220728150610-12923-m03_multinode-20220728150610-12923-m02.txt                                           |                                |         |         |                     |                     |
	| node    | multinode-20220728150610-12923                                                                                                           | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | node stop m03                                                                                                                            |                                |         |         |                     |                     |
	| node    | multinode-20220728150610-12923                                                                                                           | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:09 PDT |
	|         | node start m03                                                                                                                           |                                |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                                        |                                |         |         |                     |                     |
	| node    | list -p                                                                                                                                  | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT |                     |
	|         | multinode-20220728150610-12923                                                                                                           |                                |         |         |                     |                     |
	| stop    | -p                                                                                                                                       | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:09 PDT | 28 Jul 22 15:10 PDT |
	|         | multinode-20220728150610-12923                                                                                                           |                                |         |         |                     |                     |
	| start   | -p                                                                                                                                       | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:10 PDT |                     |
	|         | multinode-20220728150610-12923                                                                                                           |                                |         |         |                     |                     |
	|         | --wait=true -v=8                                                                                                                         |                                |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                                        |                                |         |         |                     |                     |
	| node    | list -p                                                                                                                                  | multinode-20220728150610-12923 | jenkins | v1.26.0 | 28 Jul 22 15:13 PDT |                     |
	|         | multinode-20220728150610-12923                                                                                                           |                                |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------|---------|---------|---------------------|---------------------|
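The Audit rows above record the copy-and-verify loop from TestMultiNode/serial/CopyFile: every cp into a node is immediately followed by an ssh -n <node> sudo cat of the destination file. One round trip, reconstructed from the m03 rows (profile and paths as logged; flag order approximated, not copied from the test source):

    # Push the fixture into node m03, then read it back to confirm the copy.
    out/minikube-darwin-amd64 -p multinode-20220728150610-12923 cp testdata/cp-test.txt \
      multinode-20220728150610-12923-m03:/home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n \
      multinode-20220728150610-12923-m03 sudo cat /home/docker/cp-test.txt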
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 15:10:15
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 15:10:15.811246   20887 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:10:15.811450   20887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:10:15.811455   20887 out.go:309] Setting ErrFile to fd 2...
	I0728 15:10:15.811459   20887 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:10:15.811575   20887 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:10:15.812090   20887 out.go:303] Setting JSON to false
	I0728 15:10:15.827445   20887 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7257,"bootTime":1659038958,"procs":363,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:10:15.827547   20887 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:10:15.853819   20887 out.go:177] * [multinode-20220728150610-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:10:15.896697   20887 notify.go:193] Checking for updates...
	I0728 15:10:15.918713   20887 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:10:15.939773   20887 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:10:15.961874   20887 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:10:15.984659   20887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:10:16.005855   20887 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:10:16.028562   20887 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:10:16.028642   20887 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:10:16.097020   20887 docker.go:137] docker version: linux-20.10.17
	I0728 15:10:16.097237   20887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:10:16.226272   20887 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-28 22:10:16.17124985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:10:16.247923   20887 out.go:177] * Using the docker driver based on existing profile
	I0728 15:10:16.269365   20887 start.go:284] selected driver: docker
	I0728 15:10:16.269390   20887 start.go:808] validating driver "docker" against &{Name:multinode-20220728150610-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728150610-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:10:16.269537   20887 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:10:16.269712   20887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:10:16.400959   20887 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-28 22:10:16.34597693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:10:16.403106   20887 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 15:10:16.403132   20887 cni.go:95] Creating CNI manager for ""
	I0728 15:10:16.403140   20887 cni.go:156] 3 nodes found, recommending kindnet
	I0728 15:10:16.403156   20887 start_flags.go:310] config:
	{Name:multinode-20220728150610-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728150610-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:10:16.425089   20887 out.go:177] * Starting control plane node multinode-20220728150610-12923 in cluster multinode-20220728150610-12923
	I0728 15:10:16.446841   20887 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:10:16.468735   20887 out.go:177] * Pulling base image ...
	I0728 15:10:16.511990   20887 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:10:16.512049   20887 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:10:16.512064   20887 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0728 15:10:16.512083   20887 cache.go:57] Caching tarball of preloaded images
	I0728 15:10:16.512285   20887 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:10:16.512307   20887 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0728 15:10:16.513316   20887 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/config.json ...
	I0728 15:10:16.576491   20887 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:10:16.576506   20887 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:10:16.576516   20887 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:10:16.576559   20887 start.go:370] acquiring machines lock for multinode-20220728150610-12923: {Name:mkd79d301f4101af8f61f3073fc793d92d8ea4af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:10:16.576641   20887 start.go:374] acquired machines lock for "multinode-20220728150610-12923" in 57.563µs
	I0728 15:10:16.576662   20887 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:10:16.576670   20887 fix.go:55] fixHost starting: 
	I0728 15:10:16.576905   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923 --format={{.State.Status}}
	I0728 15:10:16.639763   20887 fix.go:103] recreateIfNeeded on multinode-20220728150610-12923: state=Stopped err=<nil>
	W0728 15:10:16.639795   20887 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:10:16.683570   20887 out.go:177] * Restarting existing docker container for "multinode-20220728150610-12923" ...
	I0728 15:10:16.705605   20887 cli_runner.go:164] Run: docker start multinode-20220728150610-12923
	I0728 15:10:17.034676   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923 --format={{.State.Status}}
	I0728 15:10:17.098124   20887 kic.go:415] container "multinode-20220728150610-12923" state is running.
	I0728 15:10:17.098689   20887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923
	I0728 15:10:17.165300   20887 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/config.json ...
	I0728 15:10:17.165708   20887 machine.go:88] provisioning docker machine ...
	I0728 15:10:17.165733   20887 ubuntu.go:169] provisioning hostname "multinode-20220728150610-12923"
	I0728 15:10:17.165806   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:17.233380   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:10:17.233572   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0728 15:10:17.233586   20887 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220728150610-12923 && echo "multinode-20220728150610-12923" | sudo tee /etc/hostname
	I0728 15:10:17.363601   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220728150610-12923
	
	I0728 15:10:17.363692   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:17.428359   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:10:17.428540   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0728 15:10:17.428555   20887 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220728150610-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220728150610-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220728150610-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:10:17.546675   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:10:17.546695   20887 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:10:17.546717   20887 ubuntu.go:177] setting up certificates
	I0728 15:10:17.546732   20887 provision.go:83] configureAuth start
	I0728 15:10:17.546793   20887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923
	I0728 15:10:17.611863   20887 provision.go:138] copyHostCerts
	I0728 15:10:17.611917   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:10:17.611971   20887 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:10:17.611981   20887 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:10:17.612084   20887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:10:17.612268   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:10:17.612299   20887 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:10:17.612304   20887 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:10:17.612362   20887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:10:17.612479   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:10:17.612510   20887 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:10:17.612515   20887 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:10:17.612576   20887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:10:17.612690   20887 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.multinode-20220728150610-12923 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220728150610-12923]
	I0728 15:10:17.751768   20887 provision.go:172] copyRemoteCerts
	I0728 15:10:17.751838   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:10:17.751884   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:17.816422   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:17.905658   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 15:10:17.905732   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:10:17.922453   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 15:10:17.922514   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0728 15:10:17.938599   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 15:10:17.938669   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 15:10:17.954750   20887 provision.go:86] duration metric: configureAuth took 407.997233ms
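configureAuth regenerates the machine's server certificate with the SAN set logged above (192.168.58.2, 127.0.0.1, localhost, minikube, and the profile name) and copies it to /etc/docker on the node. If a TLS failure after a restart is suspected, the SANs on the regenerated cert can be checked directly; a hypothetical spot check (not part of the test), assuming MINIKUBE_HOME points at this run's .minikube directory:

    # List the Subject Alternative Names baked into the regenerated server cert.
    openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
      | grep -A1 'Subject Alternative Name'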
	I0728 15:10:17.954762   20887 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:10:17.954913   20887 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:10:17.954963   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.017232   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:10:18.017390   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0728 15:10:18.017401   20887 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:10:18.137197   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:10:18.137228   20887 ubuntu.go:71] root file system type: overlay
	I0728 15:10:18.137404   20887 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:10:18.137486   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.201417   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:10:18.201580   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0728 15:10:18.201627   20887 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:10:18.331185   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:10:18.331301   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.393810   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:10:18.394025   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56608 <nil> <nil>}
	I0728 15:10:18.394044   20887 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:10:18.519288   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:10:18.519309   20887 machine.go:91] provisioned docker machine in 1.353576087s
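Provisioning finishes with the diff-or-replace one-liner a few lines above: the candidate unit is written to docker.service.new, and only when it differs from the live unit is it swapped in and the daemon restarted, which keeps repeated starts idempotent. The same logic, unrolled for readability (commands exactly as in the logged one-liner, not an alternative implementation):

    # Install the regenerated unit only if it changed, then reload and restart docker.
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi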
	I0728 15:10:18.519318   20887 start.go:307] post-start starting for "multinode-20220728150610-12923" (driver="docker")
	I0728 15:10:18.519323   20887 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:10:18.519394   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:10:18.519440   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.583344   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:18.671962   20887 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:10:18.675232   20887 command_runner.go:130] > NAME="Ubuntu"
	I0728 15:10:18.675241   20887 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0728 15:10:18.675244   20887 command_runner.go:130] > ID=ubuntu
	I0728 15:10:18.675250   20887 command_runner.go:130] > ID_LIKE=debian
	I0728 15:10:18.675255   20887 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0728 15:10:18.675258   20887 command_runner.go:130] > VERSION_ID="20.04"
	I0728 15:10:18.675262   20887 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0728 15:10:18.675267   20887 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0728 15:10:18.675271   20887 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0728 15:10:18.675278   20887 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0728 15:10:18.675282   20887 command_runner.go:130] > VERSION_CODENAME=focal
	I0728 15:10:18.675285   20887 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0728 15:10:18.675436   20887 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:10:18.675453   20887 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:10:18.675460   20887 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:10:18.675464   20887 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:10:18.675475   20887 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:10:18.675583   20887 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:10:18.675715   20887 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:10:18.675721   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> /etc/ssl/certs/129232.pem
	I0728 15:10:18.675863   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:10:18.683199   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:10:18.700539   20887 start.go:310] post-start completed in 181.206774ms
	I0728 15:10:18.700610   20887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:10:18.700669   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.764313   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:18.848660   20887 command_runner.go:130] > 12%
	I0728 15:10:18.848725   20887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:10:18.852620   20887 command_runner.go:130] > 49G
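	The two df probes above report percent used and gigabytes available on /var before provisioning continues, via `df -h /var | awk 'NR==2{print $5}'` and `df -BG /var | awk 'NR==2{print $4}'`. A small sketch of the same checks run locally, assuming GNU df; dfField is a hypothetical helper:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// dfField runs df with the given flag and returns column `col` of the
	// second output line (the data row), mirroring `df ... | awk 'NR==2{print $N}'`.
	func dfField(flag, path string, col int) (string, error) {
		out, err := exec.Command("df", flag, path).Output()
		if err != nil {
			return "", err
		}
		lines := strings.Split(strings.TrimSpace(string(out)), "\n")
		if len(lines) < 2 {
			return "", fmt.Errorf("unexpected df output: %q", out)
		}
		fields := strings.Fields(lines[1])
		if col > len(fields) {
			return "", fmt.Errorf("df row has only %d fields", len(fields))
		}
		return fields[col-1], nil
	}
	
	func main() {
		used, _ := dfField("-h", "/var", 5)  // e.g. "12%"
		free, _ := dfField("-BG", "/var", 4) // e.g. "49G"
		fmt.Println("used:", used, "free:", free)
	}
	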
	I0728 15:10:18.852940   20887 fix.go:57] fixHost completed within 2.276240782s
	I0728 15:10:18.852949   20887 start.go:82] releasing machines lock for "multinode-20220728150610-12923", held for 2.276272026s
	I0728 15:10:18.853022   20887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923
	I0728 15:10:18.916111   20887 ssh_runner.go:195] Run: systemctl --version
	I0728 15:10:18.916116   20887 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:10:18.916188   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.916171   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:18.981606   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:18.981926   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:19.272350   20887 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0728 15:10:19.272368   20887 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0728 15:10:19.272376   20887 command_runner.go:130] > <H1>302 Moved</H1>
	I0728 15:10:19.272383   20887 command_runner.go:130] > The document has moved
	I0728 15:10:19.272401   20887 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0728 15:10:19.272409   20887 command_runner.go:130] > </BODY></HTML>
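	The `curl -sS -m 2 https://k8s.gcr.io/` run earlier is a two-second reachability probe: any HTTP response, including the 302 redirect above, shows the registry is reachable from inside the node. A sketch of the same probe, with the redirect deliberately not followed:
	
	package main
	
	import (
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second, // mirrors curl's -m 2 budget
			// Do not follow the 302: any response at all proves reachability.
			CheckRedirect: func(req *http.Request, via []*http.Request) error {
				return http.ErrUseLastResponse
			},
		}
		resp, err := client.Get("https://k8s.gcr.io/")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry reachable, status:", resp.Status) // e.g. "302 Found"
	}
	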
	I0728 15:10:19.273845   20887 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.17)
	I0728 15:10:19.273860   20887 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0728 15:10:19.273991   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 15:10:19.281044   20887 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0728 15:10:19.293233   20887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:10:19.360088   20887 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0728 15:10:19.446098   20887 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:10:19.455279   20887 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0728 15:10:19.455594   20887 command_runner.go:130] > [Unit]
	I0728 15:10:19.455603   20887 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 15:10:19.455608   20887 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 15:10:19.455611   20887 command_runner.go:130] > BindsTo=containerd.service
	I0728 15:10:19.455615   20887 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0728 15:10:19.455619   20887 command_runner.go:130] > Wants=network-online.target
	I0728 15:10:19.455624   20887 command_runner.go:130] > Requires=docker.socket
	I0728 15:10:19.455631   20887 command_runner.go:130] > StartLimitBurst=3
	I0728 15:10:19.455636   20887 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 15:10:19.455642   20887 command_runner.go:130] > [Service]
	I0728 15:10:19.455647   20887 command_runner.go:130] > Type=notify
	I0728 15:10:19.455652   20887 command_runner.go:130] > Restart=on-failure
	I0728 15:10:19.455659   20887 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 15:10:19.455671   20887 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 15:10:19.455677   20887 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 15:10:19.455682   20887 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 15:10:19.455689   20887 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 15:10:19.455694   20887 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 15:10:19.455714   20887 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 15:10:19.455727   20887 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 15:10:19.455734   20887 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 15:10:19.455737   20887 command_runner.go:130] > ExecStart=
	I0728 15:10:19.455750   20887 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0728 15:10:19.455754   20887 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 15:10:19.455760   20887 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 15:10:19.455765   20887 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 15:10:19.455769   20887 command_runner.go:130] > LimitNOFILE=infinity
	I0728 15:10:19.455772   20887 command_runner.go:130] > LimitNPROC=infinity
	I0728 15:10:19.455775   20887 command_runner.go:130] > LimitCORE=infinity
	I0728 15:10:19.455780   20887 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 15:10:19.455784   20887 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 15:10:19.455788   20887 command_runner.go:130] > TasksMax=infinity
	I0728 15:10:19.455804   20887 command_runner.go:130] > TimeoutStartSec=0
	I0728 15:10:19.455812   20887 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 15:10:19.455817   20887 command_runner.go:130] > Delegate=yes
	I0728 15:10:19.455833   20887 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 15:10:19.455842   20887 command_runner.go:130] > KillMode=process
	I0728 15:10:19.455847   20887 command_runner.go:130] > [Install]
	I0728 15:10:19.455851   20887 command_runner.go:130] > WantedBy=multi-user.target
	I0728 15:10:19.456420   20887 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:10:19.456473   20887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:10:19.466119   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:10:19.477913   20887 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 15:10:19.477925   20887 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 15:10:19.478666   20887 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:10:19.544989   20887 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:10:19.614701   20887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:10:19.680016   20887 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:10:19.910393   20887 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 15:10:19.979484   20887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:10:20.046781   20887 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 15:10:20.056062   20887 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 15:10:20.056123   20887 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 15:10:20.059866   20887 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0728 15:10:20.059877   20887 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0728 15:10:20.059883   20887 command_runner.go:130] > Device: 96h/150d	Inode: 113         Links: 1
	I0728 15:10:20.059892   20887 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0728 15:10:20.059899   20887 command_runner.go:130] > Access: 2022-07-28 22:10:19.369180917 +0000
	I0728 15:10:20.059906   20887 command_runner.go:130] > Modify: 2022-07-28 22:10:19.369180917 +0000
	I0728 15:10:20.059911   20887 command_runner.go:130] > Change: 2022-07-28 22:10:19.377180917 +0000
	I0728 15:10:20.059914   20887 command_runner.go:130] >  Birth: -
	I0728 15:10:20.060070   20887 start.go:471] Will wait 60s for crictl version
	I0728 15:10:20.060109   20887 ssh_runner.go:195] Run: sudo crictl version
	I0728 15:10:20.085604   20887 command_runner.go:130] > Version:  0.1.0
	I0728 15:10:20.085615   20887 command_runner.go:130] > RuntimeName:  docker
	I0728 15:10:20.085619   20887 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0728 15:10:20.085624   20887 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0728 15:10:20.087675   20887 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0728 15:10:20.087754   20887 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:10:20.118335   20887 command_runner.go:130] > 20.10.17
	I0728 15:10:20.121465   20887 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:10:20.153527   20887 command_runner.go:130] > 20.10.17
	I0728 15:10:20.198946   20887 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 15:10:20.199157   20887 cli_runner.go:164] Run: docker exec -t multinode-20220728150610-12923 dig +short host.docker.internal
	I0728 15:10:20.319120   20887 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:10:20.319232   20887 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:10:20.323265   20887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
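	The grep checks whether host.minikube.internal is already mapped; the bash one-liner then filters any stale entry out of /etc/hosts and appends the fresh mapping through a temp file, making the update idempotent. A sketch of the filter-and-append step in Go; it prints the new contents rather than writing /etc/hosts, which needs root:
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// upsertHost drops any line already ending in "\t<name>" and appends a
	// fresh "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline.
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		return strings.Join(kept, "\n") + "\n" + ip + "\t" + name + "\n"
	}
	
	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Print(upsertHost(strings.TrimRight(string(data), "\n"),
			"192.168.65.2", "host.minikube.internal"))
	}
	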
	I0728 15:10:20.332421   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:20.395207   20887 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:10:20.395280   20887 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:10:20.422104   20887 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.24.3
	I0728 15:10:20.422121   20887 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.24.3
	I0728 15:10:20.422127   20887 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.24.3
	I0728 15:10:20.422141   20887 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.24.3
	I0728 15:10:20.422146   20887 command_runner.go:130] > kindest/kindnetd:v20220510-4929dd75
	I0728 15:10:20.422151   20887 command_runner.go:130] > k8s.gcr.io/etcd:3.5.3-0
	I0728 15:10:20.422155   20887 command_runner.go:130] > k8s.gcr.io/pause:3.7
	I0728 15:10:20.422159   20887 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0728 15:10:20.422163   20887 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0728 15:10:20.422167   20887 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:10:20.422170   20887 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0728 15:10:20.424699   20887 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	kindest/kindnetd:v20220510-4929dd75
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0728 15:10:20.424714   20887 docker.go:542] Images already preloaded, skipping extraction
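	The output of `docker images --format {{.Repository}}:{{.Tag}}` is compared against the images the v1.24.3 preload should contain; since everything is present, tarball extraction is skipped. A sketch of that set check, with the required list truncated to three entries from the output above for brevity:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		required := []string{
			"k8s.gcr.io/kube-apiserver:v1.24.3",
			"k8s.gcr.io/etcd:3.5.3-0",
			"k8s.gcr.io/pause:3.7",
		}
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, img := range required {
			if !have[img] {
				fmt.Println("missing, would extract preload:", img)
				return
			}
		}
		fmt.Println("images already preloaded, skipping extraction")
	}
	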
	I0728 15:10:20.424785   20887 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:10:20.451355   20887 command_runner.go:130] > k8s.gcr.io/kube-apiserver:v1.24.3
	I0728 15:10:20.451367   20887 command_runner.go:130] > k8s.gcr.io/kube-controller-manager:v1.24.3
	I0728 15:10:20.451372   20887 command_runner.go:130] > k8s.gcr.io/kube-proxy:v1.24.3
	I0728 15:10:20.451376   20887 command_runner.go:130] > k8s.gcr.io/kube-scheduler:v1.24.3
	I0728 15:10:20.451385   20887 command_runner.go:130] > kindest/kindnetd:v20220510-4929dd75
	I0728 15:10:20.451390   20887 command_runner.go:130] > k8s.gcr.io/etcd:3.5.3-0
	I0728 15:10:20.451394   20887 command_runner.go:130] > k8s.gcr.io/pause:3.7
	I0728 15:10:20.451402   20887 command_runner.go:130] > k8s.gcr.io/coredns/coredns:v1.8.6
	I0728 15:10:20.451408   20887 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I0728 15:10:20.451417   20887 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:10:20.451428   20887 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0728 15:10:20.454268   20887 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	kindest/kindnetd:v20220510-4929dd75
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0728 15:10:20.454291   20887 cache_images.go:84] Images are preloaded, skipping loading
	I0728 15:10:20.454368   20887 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:10:20.522620   20887 command_runner.go:130] > systemd
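	`docker info --format {{.CgroupDriver}}` reports systemd here, which matters because the kubelet's cgroupDriver (set to systemd in the generated KubeletConfiguration below) must match the runtime's driver or pods fail to start. A sketch of the detection step, assuming the docker CLI is on PATH:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		driver := strings.TrimSpace(string(out)) // "systemd" or "cgroupfs"
		fmt.Println("docker cgroup driver:", driver)
		if driver != "systemd" {
			fmt.Println("kubelet cgroupDriver would need to match:", driver)
		}
	}
	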
	I0728 15:10:20.525691   20887 cni.go:95] Creating CNI manager for ""
	I0728 15:10:20.525702   20887 cni.go:156] 3 nodes found, recommending kindnet
	I0728 15:10:20.525717   20887 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:10:20.525735   20887 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220728150610-12923 NodeName:multinode-20220728150610-12923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:10:20.525865   20887 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220728150610-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
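	The generated config above is four YAML documents, InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, joined by --- separators in a single file. A sketch of splitting such a file back into its documents with a plain string split (a real tool would use a YAML decoder; the path is illustrative):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		data, err := os.ReadFile("kubeadm.yaml") // illustrative path
		if err != nil {
			fmt.Println(err)
			return
		}
		for i, doc := range strings.Split(string(data), "\n---\n") {
			// Report each document's kind: line as a quick sanity check.
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind: ") {
					fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
				}
			}
		}
	}
	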
	I0728 15:10:20.525956   20887 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220728150610-12923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728150610-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0728 15:10:20.526025   20887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 15:10:20.532903   20887 command_runner.go:130] > kubeadm
	I0728 15:10:20.532912   20887 command_runner.go:130] > kubectl
	I0728 15:10:20.532920   20887 command_runner.go:130] > kubelet
	I0728 15:10:20.533757   20887 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:10:20.533802   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:10:20.540651   20887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (492 bytes)
	I0728 15:10:20.552624   20887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:10:20.564863   20887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2052 bytes)
	I0728 15:10:20.577323   20887 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:10:20.581045   20887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:10:20.590597   20887 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923 for IP: 192.168.58.2
	I0728 15:10:20.590705   20887 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:10:20.590756   20887 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:10:20.590840   20887 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.key
	I0728 15:10:20.590898   20887 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/apiserver.key.cee25041
	I0728 15:10:20.590943   20887 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/proxy-client.key
	I0728 15:10:20.590949   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0728 15:10:20.591003   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0728 15:10:20.591031   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0728 15:10:20.591055   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0728 15:10:20.591073   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0728 15:10:20.591088   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0728 15:10:20.591104   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0728 15:10:20.591119   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0728 15:10:20.591230   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:10:20.591271   20887 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:10:20.591283   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:10:20.591317   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:10:20.591351   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:10:20.591381   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:10:20.591446   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:10:20.591480   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem -> /usr/share/ca-certificates/12923.pem
	I0728 15:10:20.591498   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> /usr/share/ca-certificates/129232.pem
	I0728 15:10:20.591515   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:10:20.592036   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:10:20.608345   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0728 15:10:20.624811   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:10:20.641977   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 15:10:20.658508   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:10:20.674687   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:10:20.691379   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:10:20.707949   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:10:20.724111   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:10:20.740179   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:10:20.756547   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:10:20.772557   20887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 15:10:20.784846   20887 ssh_runner.go:195] Run: openssl version
	I0728 15:10:20.789698   20887 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0728 15:10:20.790002   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:10:20.797570   20887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:10:20.801274   20887 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:10:20.801291   20887 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:10:20.801327   20887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:10:20.805951   20887 command_runner.go:130] > 51391683
	I0728 15:10:20.806188   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 15:10:20.813288   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:10:20.844826   20887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:10:20.848327   20887 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:10:20.848495   20887 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:10:20.848546   20887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:10:20.853229   20887 command_runner.go:130] > 3ec20f2e
	I0728 15:10:20.853528   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 15:10:20.860746   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:10:20.868316   20887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:10:20.871931   20887 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:10:20.872084   20887 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:10:20.872141   20887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:10:20.876832   20887 command_runner.go:130] > b5213941
	I0728 15:10:20.877039   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
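	Each CA above is installed by computing its OpenSSL subject hash (51391683, 3ec20f2e, b5213941 in this run) and symlinking /etc/ssl/certs/<hash>.0 to the certificate, which is the layout OpenSSL's hashed-directory lookup expects. A sketch of the hash step, shelling out to the same openssl invocation (the cert path is illustrative):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// subjectHash returns what `openssl x509 -hash -noout -in cert` prints,
	// i.e. the filename stem OpenSSL expects for a trust-store symlink.
	func subjectHash(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			fmt.Println(err)
			return
		}
		// The installer would then run: ln -fs <cert> /etc/ssl/certs/<hash>.0
		fmt.Printf("ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
	}
	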
	I0728 15:10:20.883882   20887 kubeadm.go:395] StartCluster: {Name:multinode-20220728150610-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728150610-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:10:20.883989   20887 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:10:20.912948   20887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:10:20.919832   20887 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0728 15:10:20.919844   20887 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0728 15:10:20.919849   20887 command_runner.go:130] > /var/lib/minikube/etcd:
	I0728 15:10:20.919867   20887 command_runner.go:130] > member
	I0728 15:10:20.920435   20887 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 15:10:20.920449   20887 kubeadm.go:626] restartCluster start
	I0728 15:10:20.920496   20887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 15:10:20.927238   20887 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:20.927291   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:20.990032   20887 kubeconfig.go:116] verify returned: extract IP: "multinode-20220728150610-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:10:20.990114   20887 kubeconfig.go:127] "multinode-20220728150610-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 15:10:20.990343   20887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:10:20.990859   20887 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:10:20.991069   20887 kapi.go:59] client config for multinode-20220728150610-12923: &rest.Config{Host:"https://127.0.0.1:56607", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 15:10:20.991384   20887 cert_rotation.go:137] Starting client certificate rotation controller
	I0728 15:10:20.991528   20887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 15:10:20.999043   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:20.999102   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:21.006905   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:21.207372   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:21.207500   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:21.217954   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:21.407555   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:21.407671   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:21.419369   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:21.608154   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:21.608389   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:21.618943   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:21.809037   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:21.809252   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:21.819597   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:22.007026   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:22.007216   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:22.017034   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:22.209053   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:22.209191   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:22.219642   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:22.407458   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:22.407668   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:22.418267   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:22.609156   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:22.609335   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:22.619763   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:22.808073   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:22.808167   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:22.818864   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:23.009059   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:23.009211   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:23.019328   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:23.207047   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:23.207253   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:23.217269   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:23.407360   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:23.407458   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:23.417405   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:23.607117   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:23.607247   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:23.616680   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:23.809085   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:23.809251   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:23.819674   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:24.009110   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:24.009248   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:24.019592   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:24.019601   20887 api_server.go:165] Checking apiserver status ...
	I0728 15:10:24.019654   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:10:24.027532   20887 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:24.027545   20887 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
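	The run of identical pgrep probes above is a poll loop: check for a kube-apiserver process roughly every 200ms until it appears or the budget is exhausted, after which restartCluster falls back to a full reconfigure. A compact sketch of that retry shape; the 3s timeout is illustrative, not the real budget, and the probe omits the log's sudo:
	
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)
	
	// apiserverRunning mirrors `pgrep -xnf kube-apiserver.*minikube.*`:
	// pgrep exits 0 only when a matching process exists.
	func apiserverRunning() bool {
		return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}
	
	func waitForAPIServer(timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("timed out waiting for the condition")
	}
	
	func main() {
		if err := waitForAPIServer(3*time.Second, 200*time.Millisecond); err != nil {
			fmt.Println("needs reconfigure:", err) // matches the log's conclusion
		}
	}
	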
	I0728 15:10:24.027551   20887 kubeadm.go:1092] stopping kube-system containers ...
	I0728 15:10:24.027610   20887 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:10:24.054242   20887 command_runner.go:130] > 89466f3f8306
	I0728 15:10:24.054253   20887 command_runner.go:130] > 765d6b79e654
	I0728 15:10:24.054257   20887 command_runner.go:130] > ece8e7f7eb66
	I0728 15:10:24.054260   20887 command_runner.go:130] > 50a595b77903
	I0728 15:10:24.054263   20887 command_runner.go:130] > 7b9caab60a97
	I0728 15:10:24.054266   20887 command_runner.go:130] > 0d0894f41f2a
	I0728 15:10:24.054269   20887 command_runner.go:130] > 848acc25a7d7
	I0728 15:10:24.054274   20887 command_runner.go:130] > 4a96e7ffb1b4
	I0728 15:10:24.054279   20887 command_runner.go:130] > 8e2030fdbc79
	I0728 15:10:24.054290   20887 command_runner.go:130] > abab41f9a904
	I0728 15:10:24.054294   20887 command_runner.go:130] > 9db2ba48d7a6
	I0728 15:10:24.054297   20887 command_runner.go:130] > 06994bc702bb
	I0728 15:10:24.054300   20887 command_runner.go:130] > 3641ce6d4a53
	I0728 15:10:24.054304   20887 command_runner.go:130] > bb142f1efac9
	I0728 15:10:24.054306   20887 command_runner.go:130] > 21e11a020b83
	I0728 15:10:24.054311   20887 command_runner.go:130] > e71a37402f1e
	I0728 15:10:24.057234   20887 docker.go:443] Stopping containers: [89466f3f8306 765d6b79e654 ece8e7f7eb66 50a595b77903 7b9caab60a97 0d0894f41f2a 848acc25a7d7 4a96e7ffb1b4 8e2030fdbc79 abab41f9a904 9db2ba48d7a6 06994bc702bb 3641ce6d4a53 bb142f1efac9 21e11a020b83 e71a37402f1e]
	I0728 15:10:24.057306   20887 ssh_runner.go:195] Run: docker stop 89466f3f8306 765d6b79e654 ece8e7f7eb66 50a595b77903 7b9caab60a97 0d0894f41f2a 848acc25a7d7 4a96e7ffb1b4 8e2030fdbc79 abab41f9a904 9db2ba48d7a6 06994bc702bb 3641ce6d4a53 bb142f1efac9 21e11a020b83 e71a37402f1e
	I0728 15:10:24.087008   20887 command_runner.go:130] > 89466f3f8306
	I0728 15:10:24.087022   20887 command_runner.go:130] > 765d6b79e654
	I0728 15:10:24.087026   20887 command_runner.go:130] > ece8e7f7eb66
	I0728 15:10:24.087029   20887 command_runner.go:130] > 50a595b77903
	I0728 15:10:24.087032   20887 command_runner.go:130] > 7b9caab60a97
	I0728 15:10:24.087038   20887 command_runner.go:130] > 0d0894f41f2a
	I0728 15:10:24.087042   20887 command_runner.go:130] > 848acc25a7d7
	I0728 15:10:24.087046   20887 command_runner.go:130] > 4a96e7ffb1b4
	I0728 15:10:24.087049   20887 command_runner.go:130] > 8e2030fdbc79
	I0728 15:10:24.087052   20887 command_runner.go:130] > abab41f9a904
	I0728 15:10:24.087056   20887 command_runner.go:130] > 9db2ba48d7a6
	I0728 15:10:24.087059   20887 command_runner.go:130] > 06994bc702bb
	I0728 15:10:24.087063   20887 command_runner.go:130] > 3641ce6d4a53
	I0728 15:10:24.087066   20887 command_runner.go:130] > bb142f1efac9
	I0728 15:10:24.087069   20887 command_runner.go:130] > 21e11a020b83
	I0728 15:10:24.087073   20887 command_runner.go:130] > e71a37402f1e
	I0728 15:10:24.087126   20887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 15:10:24.097123   20887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:10:24.103794   20887 command_runner.go:130] > -rw------- 1 root root 5643 Jul 28 22:06 /etc/kubernetes/admin.conf
	I0728 15:10:24.103819   20887 command_runner.go:130] > -rw------- 1 root root 5656 Jul 28 22:06 /etc/kubernetes/controller-manager.conf
	I0728 15:10:24.103832   20887 command_runner.go:130] > -rw------- 1 root root 2059 Jul 28 22:06 /etc/kubernetes/kubelet.conf
	I0728 15:10:24.103849   20887 command_runner.go:130] > -rw------- 1 root root 5600 Jul 28 22:06 /etc/kubernetes/scheduler.conf
	I0728 15:10:24.104519   20887 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 28 22:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 28 22:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2059 Jul 28 22:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 28 22:06 /etc/kubernetes/scheduler.conf
	
	I0728 15:10:24.104568   20887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 15:10:24.111292   20887 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0728 15:10:24.111910   20887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 15:10:24.118662   20887 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I0728 15:10:24.119489   20887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 15:10:24.126384   20887 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:24.126429   20887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 15:10:24.133425   20887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 15:10:24.140649   20887 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:10:24.140700   20887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
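	Each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; files that fail the grep, here controller-manager.conf and scheduler.conf, are deleted so kubeadm regenerates them in the next step. A sketch of that check (it only prints what it would remove):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(conf)
			if err != nil {
				continue // missing file: kubeadm will write a fresh one
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Println("stale endpoint, would remove:", conf)
				// os.Remove(conf) would mirror the `sudo rm -f` above
			}
		}
	}
	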
	I0728 15:10:24.147327   20887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:10:24.154358   20887 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 15:10:24.154370   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:10:24.193832   20887 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0728 15:10:24.193844   20887 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0728 15:10:24.194373   20887 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0728 15:10:24.194669   20887 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0728 15:10:24.195001   20887 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0728 15:10:24.195324   20887 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0728 15:10:24.195475   20887 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0728 15:10:24.195896   20887 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0728 15:10:24.196378   20887 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0728 15:10:24.196727   20887 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0728 15:10:24.197009   20887 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0728 15:10:24.197373   20887 command_runner.go:130] > [certs] Using the existing "sa" key
	I0728 15:10:24.200570   20887 command_runner.go:130] ! W0728 22:10:24.193675    1084 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:10:24.200588   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:10:24.239367   20887 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0728 15:10:24.372852   20887 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0728 15:10:24.637750   20887 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0728 15:10:24.924617   20887 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0728 15:10:25.188133   20887 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0728 15:10:25.192024   20887 command_runner.go:130] ! W0728 22:10:24.238810    1094 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:10:25.192054   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:10:25.287921   20887 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0728 15:10:25.288969   20887 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0728 15:10:25.288977   20887 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0728 15:10:25.362289   20887 command_runner.go:130] ! W0728 22:10:25.231047    1117 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:10:25.362321   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:10:25.398362   20887 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0728 15:10:25.398375   20887 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0728 15:10:25.402342   20887 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0728 15:10:25.403201   20887 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0728 15:10:25.406925   20887 command_runner.go:130] ! W0728 22:10:25.398196    1161 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:10:25.406953   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:10:25.445694   20887 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0728 15:10:25.456452   20887 command_runner.go:130] ! W0728 22:10:25.446485    1174 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
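
With the config copied into place, the bootstrapper replays individual kubeadm init phases rather than running a full init: certs, kubeconfig, kubelet-start, control-plane, then etcd. Each phase is idempotent, which is why the output reports "Using existing ..." for certificates that are still valid. A hedged sketch of that sequence, where runCmd is a stand-in for minikube's ssh_runner (the real code executes inside the node over SSH), assuming kubeadm is present at the logged path:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// runCmd stands in for minikube's ssh_runner: here the command runs
// locally, whereas minikube executes it inside the node over SSH.
func runCmd(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// The five phases the log walks through, in order.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if err := runCmd(cmd); err != nil {
			log.Fatalf("phase %q failed: %v", p, err)
		}
	}
}
```
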
	I0728 15:10:25.456481   20887 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:10:25.456537   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:10:25.966687   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:10:26.466663   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:10:26.485240   20887 command_runner.go:130] > 1658
	I0728 15:10:26.485265   20887 api_server.go:71] duration metric: took 1.028783006s to wait for apiserver process to appear ...
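
Before probing health over HTTPS, the code first waits for the kube-apiserver process itself, retrying pgrep roughly every half second until it prints a PID (1658 here). A small illustrative equivalent, meant to run inside the node:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll pgrep until the apiserver process appears, mirroring the
	// ~500ms retry cadence in the log above.
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```
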
	I0728 15:10:26.485285   20887 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:10:26.485308   20887 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0728 15:10:26.487124   20887 api_server.go:256] stopped: https://127.0.0.1:56607/healthz: Get "https://127.0.0.1:56607/healthz": EOF
	I0728 15:10:26.988185   20887 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0728 15:10:29.359701   20887 api_server.go:266] https://127.0.0.1:56607/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 15:10:29.359716   20887 api_server.go:102] status: https://127.0.0.1:56607/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 15:10:29.487341   20887 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0728 15:10:29.494429   20887 api_server.go:266] https://127.0.0.1:56607/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:10:29.494447   20887 api_server.go:102] status: https://127.0.0.1:56607/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:10:29.987374   20887 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0728 15:10:29.995683   20887 api_server.go:266] https://127.0.0.1:56607/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:10:29.995701   20887 api_server.go:102] status: https://127.0.0.1:56607/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:10:30.487290   20887 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0728 15:10:30.493116   20887 api_server.go:266] https://127.0.0.1:56607/healthz returned 200:
	ok
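
The healthz probe above cycles through three states before settling: an EOF while the apiserver socket is not yet serving, a 403 while anonymous access is still forbidden during RBAC bootstrap, and a 500 while poststart hooks such as rbac/bootstrap-roles are unfinished, until it finally returns 200 "ok". A self-contained sketch of that loop; it skips TLS verification for brevity (minikube actually validates against the cluster CA), and port 56607 is just the host port Docker mapped to 8443 in this run:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	for {
		resp, err := client.Get("https://127.0.0.1:56607/healthz")
		if err != nil {
			fmt.Println("not up yet:", err) // e.g. EOF while the apiserver starts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // "ok"
			}
			// 403: anonymous access still forbidden while RBAC bootstraps.
			// 500: poststart hooks (e.g. rbac/bootstrap-roles) not finished.
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```
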
	I0728 15:10:30.493178   20887 round_trippers.go:463] GET https://127.0.0.1:56607/version
	I0728 15:10:30.493186   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:30.493194   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:30.493200   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:30.499482   20887 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0728 15:10:30.499493   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:30.499498   20887 round_trippers.go:580]     Audit-Id: 6fbd46fc-5a70-4aa3-a4c2-8eed5b981815
	I0728 15:10:30.499504   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:30.499508   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:30.499513   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:30.499518   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:30.499522   20887 round_trippers.go:580]     Content-Length: 263
	I0728 15:10:30.499527   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:30 GMT
	I0728 15:10:30.499545   20887 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "24",
	  "gitVersion": "v1.24.3",
	  "gitCommit": "aef86a93758dc3cb2c658dd9657ab4ad4afc21cb",
	  "gitTreeState": "clean",
	  "buildDate": "2022-07-13T14:23:26Z",
	  "goVersion": "go1.18.3",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0728 15:10:30.499591   20887 api_server.go:140] control plane version: v1.24.3
	I0728 15:10:30.499602   20887 api_server.go:130] duration metric: took 4.014306737s to wait for apiserver health ...
	I0728 15:10:30.499607   20887 cni.go:95] Creating CNI manager for ""
	I0728 15:10:30.499611   20887 cni.go:156] 3 nodes found, recommending kindnet
	I0728 15:10:30.520847   20887 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0728 15:10:30.542298   20887 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0728 15:10:30.547667   20887 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0728 15:10:30.547678   20887 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0728 15:10:30.547687   20887 command_runner.go:130] > Device: 8eh/142d	Inode: 267113      Links: 1
	I0728 15:10:30.547696   20887 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0728 15:10:30.547702   20887 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0728 15:10:30.547707   20887 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0728 15:10:30.547711   20887 command_runner.go:130] > Change: 2022-07-28 21:39:58.672799402 +0000
	I0728 15:10:30.547715   20887 command_runner.go:130] >  Birth: -
	I0728 15:10:30.547754   20887 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.3/kubectl ...
	I0728 15:10:30.547761   20887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0728 15:10:30.561034   20887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0728 15:10:31.276669   20887 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0728 15:10:31.279451   20887 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0728 15:10:31.283993   20887 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0728 15:10:31.295794   20887 command_runner.go:130] > daemonset.apps/kindnet configured
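
Because this profile has three nodes, minikube selects kindnet as the CNI, stats /opt/cni/bin/portmap to confirm the CNI plugins are installed, and applies the manifest with the cluster's own kubectl; the "unchanged"/"configured" output shows the apply is idempotent across restarts. An illustrative Go equivalent of those two steps, using the same paths as the log and meant to run inside the node:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	// Precheck: the CNI plugin binaries must already be on the node.
	if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
		log.Fatalf("CNI plugins missing: %v", err)
	}
	// Apply the kindnet manifest with the bundled kubectl.
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.3/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatal(err)
	}
}
```
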
	I0728 15:10:31.352052   20887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:10:31.352130   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:10:31.352137   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.352146   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.352154   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.356002   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:31.356021   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.356029   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.356039   20887 round_trippers.go:580]     Audit-Id: 00f61dd6-bf01-4638-b451-aeeec006f1d1
	I0728 15:10:31.356045   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.356052   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.356062   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.356072   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.357186   20887 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"691"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"411","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},
"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".": [truncated 83373 chars]
	I0728 15:10:31.360127   20887 system_pods.go:59] 12 kube-system pods found
	I0728 15:10:31.360143   20887 system_pods.go:61] "coredns-6d4b75cb6d-dfxk7" [ea8a6018-c281-45ec-bbb7-19f2988aa884] Running
	I0728 15:10:31.360147   20887 system_pods.go:61] "etcd-multinode-20220728150610-12923" [2d4683e7-2e93-41c5-af51-5181a7c29edd] Running
	I0728 15:10:31.360151   20887 system_pods.go:61] "kindnet-52mvf" [ef5b2400-09e0-4d0c-98b9-d520fd42e827] Running
	I0728 15:10:31.360154   20887 system_pods.go:61] "kindnet-tlp2m" [b535556f-fbbe-4220-9037-5016f1b8fb51] Running
	I0728 15:10:31.360157   20887 system_pods.go:61] "kindnet-v5hq8" [3410f1c1-9947-4a08-8503-660caf65dc5c] Running
	I0728 15:10:31.360162   20887 system_pods.go:61] "kube-apiserver-multinode-20220728150610-12923" [34425f5f-5cbc-4e7c-89b3-e4758c44f162] Running
	I0728 15:10:31.360165   20887 system_pods.go:61] "kube-controller-manager-multinode-20220728150610-12923" [92841ab9-f773-435e-a133-794e0d8e0cef] Running
	I0728 15:10:31.360168   20887 system_pods.go:61] "kube-proxy-bxdk6" [befca8fa-aef6-415a-b033-8522067db320] Running
	I0728 15:10:31.360171   20887 system_pods.go:61] "kube-proxy-cdz7z" [f9727653-ed51-43f3-95ad-fd2f5fb0ac6e] Running
	I0728 15:10:31.360174   20887 system_pods.go:61] "kube-proxy-cn9x2" [813dc8a0-2ea3-4ee9-83ce-fe09ccf38295] Running
	I0728 15:10:31.360179   20887 system_pods.go:61] "kube-scheduler-multinode-20220728150610-12923" [ef5d84ce-4249-4af0-b1be-7a3d7f8c2205] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 15:10:31.360186   20887 system_pods.go:61] "storage-provisioner" [29238934-2c0b-4262-80ff-12975d44a715] Running
	I0728 15:10:31.360189   20887 system_pods.go:74] duration metric: took 8.125685ms to wait for pod list to return data ...
	I0728 15:10:31.360196   20887 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:10:31.360228   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes
	I0728 15:10:31.360232   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.360238   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.360243   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.362649   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:31.362662   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.362670   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.362697   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.362724   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.362736   20887 round_trippers.go:580]     Audit-Id: 99e2d4b1-bb2e-4f02-affc-de2fc524e449
	I0728 15:10:31.362743   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.362750   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.363193   20887 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"691"},"items":[{"metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-m
anaged-attach-detach":"true"},"managedFields":[{"manager":"kubelet","op [truncated 16208 chars]
	I0728 15:10:31.363825   20887 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:10:31.363837   20887 node_conditions.go:123] node cpu capacity is 6
	I0728 15:10:31.363848   20887 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:10:31.363852   20887 node_conditions.go:123] node cpu capacity is 6
	I0728 15:10:31.363855   20887 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:10:31.363859   20887 node_conditions.go:123] node cpu capacity is 6
	I0728 15:10:31.363862   20887 node_conditions.go:105] duration metric: took 3.663155ms to run NodePressure ...
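
The NodePressure step lists all nodes and reads capacity from each, which is why three identical cpu/ephemeral-storage pairs appear above for the three-node cluster. A client-go sketch that reports the same per-node capacity figures; the kubeconfig path is an assumption for the example:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path is an assumption
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		// Same fields the log prints: cpu and ephemeral-storage capacity.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```
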
	I0728 15:10:31.363874   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:10:31.568994   20887 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0728 15:10:31.669716   20887 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0728 15:10:31.676783   20887 command_runner.go:130] ! W0728 22:10:31.416577    2222 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:10:31.676806   20887 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0728 15:10:31.676865   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0728 15:10:31.676870   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.676877   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.676883   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.680864   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:31.680877   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.680883   20887 round_trippers.go:580]     Audit-Id: a8ee54f4-126d-4750-876c-535e354a839b
	I0728 15:10:31.680887   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.680892   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.680897   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.680901   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.680905   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.681089   20887 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"693"},"items":[{"metadata":{"name":"etcd-multinode-20220728150610-12923","namespace":"kube-system","uid":"2d4683e7-2e93-41c5-af51-5181a7c29edd","resourceVersion":"319","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"86483540902094820b1a0cc29f31f22f","kubernetes.io/config.mirror":"86483540902094820b1a0cc29f31f22f","kubernetes.io/config.seen":"2022-07-28T22:06:37.255020292Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time"
:"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata [truncated 30755 chars]
	I0728 15:10:31.681831   20887 kubeadm.go:777] kubelet initialised
	I0728 15:10:31.681839   20887 kubeadm.go:778] duration metric: took 5.018574ms waiting for restarted kubelet to initialise ...
	I0728 15:10:31.681846   20887 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:10:31.681877   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:10:31.681880   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.681886   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.681892   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.685268   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:31.685284   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.685292   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.685300   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.685319   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.685333   20887 round_trippers.go:580]     Audit-Id: 62e9b6ef-3db6-474b-9b59-cc489e919cc2
	I0728 15:10:31.685342   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.685348   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.687157   20887 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"693"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"411","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},
"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".": [truncated 83840 chars]
	I0728 15:10:31.689057   20887 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:31.689106   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:31.689110   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.689116   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.689122   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.692042   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:31.692055   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.692061   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.692066   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.692070   20887 round_trippers.go:580]     Audit-Id: 37604e65-07bb-46b1-9c4a-2ba090b8e744
	I0728 15:10:31.692075   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.692079   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.692085   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.692145   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"411","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 5982 chars]
	I0728 15:10:31.692410   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:31.692418   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.692427   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.692442   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.694782   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:31.694807   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.694816   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.694823   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.694831   20887 round_trippers.go:580]     Audit-Id: 8e58f15b-c0ca-421c-99c9-35d8fa67519d
	I0728 15:10:31.694846   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.694862   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.694871   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.694958   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:31.695169   20887 pod_ready.go:92] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:31.695179   20887 pod_ready.go:81] duration metric: took 6.109081ms waiting for pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace to be "Ready" ...
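
Each system-critical pod is then polled until its Ready condition is True, with a 4-minute ceiling per pod. A hedged client-go sketch of one such wait, reusing the coredns pod name from this run; wait.PollImmediate and the kubeconfig path are example choices for the sketch, not necessarily minikube's internals:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same
// status the pod_ready loop above keys on.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path is an assumption
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll for up to 4 minutes, matching the log's "waiting up to 4m0s".
	err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6d4b75cb6d-dfxk7", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		return podReady(pod), nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod is Ready")
}
```
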
	I0728 15:10:31.695187   20887 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:31.695222   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/etcd-multinode-20220728150610-12923
	I0728 15:10:31.695229   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.695235   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.695243   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.697682   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:31.697695   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.697701   20887 round_trippers.go:580]     Audit-Id: e0f33fc5-f37d-4352-9d28-292ba24301c3
	I0728 15:10:31.697706   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.697710   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.697716   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.697720   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.697725   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.697783   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220728150610-12923","namespace":"kube-system","uid":"2d4683e7-2e93-41c5-af51-5181a7c29edd","resourceVersion":"319","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"86483540902094820b1a0cc29f31f22f","kubernetes.io/config.mirror":"86483540902094820b1a0cc29f31f22f","kubernetes.io/config.seen":"2022-07-28T22:06:37.255020292Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fi
eldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io [truncated 5843 chars]
	I0728 15:10:31.698038   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:31.698044   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.698050   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.698057   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.701269   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:31.701283   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.701289   20887 round_trippers.go:580]     Audit-Id: 3a2fb442-6199-423b-bc4b-dc2100732ffd
	I0728 15:10:31.701293   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.701298   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.701305   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.701312   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.701319   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.701375   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:31.701591   20887 pod_ready.go:92] pod "etcd-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:31.701600   20887 pod_ready.go:81] duration metric: took 6.4065ms waiting for pod "etcd-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:31.701611   20887 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:31.701645   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220728150610-12923
	I0728 15:10:31.701650   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.701655   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.701661   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.704654   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:31.704665   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.704671   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.704676   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.704683   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.704689   20887 round_trippers.go:580]     Audit-Id: 24cf915d-9029-4a3a-aa2a-75b2690c4ec4
	I0728 15:10:31.704693   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.704699   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.704758   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220728150610-12923","namespace":"kube-system","uid":"34425f5f-5cbc-4e7c-89b3-e4758c44f162","resourceVersion":"281","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"f17cd4a02884221436b424aa5c9008ee","kubernetes.io/config.mirror":"f17cd4a02884221436b424aa5c9008ee","kubernetes.io/config.seen":"2022-07-28T22:06:37.255021189Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z",
"fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".": [truncated 8310 chars]
	I0728 15:10:31.705043   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:31.705050   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.705056   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.705061   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.709251   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:31.709263   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.709269   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.709273   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.709278   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.709282   20887 round_trippers.go:580]     Audit-Id: 5807c750-54cb-454d-a8d7-52cffd414413
	I0728 15:10:31.709287   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.709292   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.709504   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:31.709717   20887 pod_ready.go:92] pod "kube-apiserver-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:31.709726   20887 pod_ready.go:81] duration metric: took 8.109388ms waiting for pod "kube-apiserver-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:31.709734   20887 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:31.709776   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:31.709784   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.709793   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.709800   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.753692   20887 round_trippers.go:574] Response Status: 200 OK in 43 milliseconds
	I0728 15:10:31.753723   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.753741   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.753763   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.753786   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.753798   20887 round_trippers.go:580]     Audit-Id: ab7bdc65-a664-43bf-bfb9-a52c9c3c8a63
	I0728 15:10:31.753817   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.753837   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.754758   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:31.755236   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:31.755246   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:31.755254   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:31.755305   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:31.759738   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:31.759752   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:31.759758   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:31.759765   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:31.759771   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:31.759776   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:31.759781   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:31 GMT
	I0728 15:10:31.759786   20887 round_trippers.go:580]     Audit-Id: c1f9fda3-fa61-4aed-917c-361d01e05d00
	I0728 15:10:31.759840   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:32.261623   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:32.261644   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:32.261656   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:32.261666   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:32.265093   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:32.265118   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:32.265130   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:32 GMT
	I0728 15:10:32.265144   20887 round_trippers.go:580]     Audit-Id: c093c7ec-ac70-463f-b163-2abf7436d48d
	I0728 15:10:32.265151   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:32.265157   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:32.265167   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:32.265174   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:32.265458   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:32.265743   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:32.265749   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:32.265755   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:32.265761   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:32.267930   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:32.267941   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:32.267950   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:32.267955   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:32.267960   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:32.267964   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:32 GMT
	I0728 15:10:32.267969   20887 round_trippers.go:580]     Audit-Id: 75c0f484-c374-47c3-ab0b-555277c52548
	I0728 15:10:32.267973   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:32.268799   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:32.761227   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:32.761254   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:32.761266   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:32.761276   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:32.765662   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:32.765677   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:32.765701   20887 round_trippers.go:580]     Audit-Id: 9cfe05eb-92d4-4131-aaeb-cc5ca2ba84bf
	I0728 15:10:32.765706   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:32.765711   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:32.765716   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:32.765721   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:32.765725   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:32 GMT
	I0728 15:10:32.765796   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:32.766103   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:32.766110   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:32.766115   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:32.766121   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:32.767765   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:32.767780   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:32.767793   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:32.767803   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:32.767810   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:32 GMT
	I0728 15:10:32.767819   20887 round_trippers.go:580]     Audit-Id: a30f8ace-f8a7-4f3c-960e-53eb105eb364
	I0728 15:10:32.767825   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:32.767833   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:32.768097   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:33.261918   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:33.261938   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:33.261950   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:33.261960   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:33.265834   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:33.265849   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:33.265857   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:33.265863   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:33 GMT
	I0728 15:10:33.265871   20887 round_trippers.go:580]     Audit-Id: 620b0f05-7cbe-4d99-9496-e17f80c3f1e2
	I0728 15:10:33.265877   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:33.265884   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:33.265890   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:33.265972   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:33.266299   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:33.266305   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:33.266311   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:33.266316   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:33.268509   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:33.268519   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:33.268526   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:33.268532   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:33 GMT
	I0728 15:10:33.268536   20887 round_trippers.go:580]     Audit-Id: c903f6be-cbe7-4db0-9f1a-92df74e99c36
	I0728 15:10:33.268541   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:33.268546   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:33.268550   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:33.268598   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:33.760686   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:33.760707   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:33.760719   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:33.760729   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:33.764510   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:33.764520   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:33.764526   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:33.764531   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:33.764535   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:33 GMT
	I0728 15:10:33.764549   20887 round_trippers.go:580]     Audit-Id: 22d6312f-3fac-46db-b008-757b79f23127
	I0728 15:10:33.764554   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:33.764561   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:33.764953   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:33.765254   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:33.765260   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:33.765266   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:33.765271   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:33.767129   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:33.767138   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:33.767143   20887 round_trippers.go:580]     Audit-Id: f873cb6f-bd0d-4c8f-ad2a-a8d1f430e215
	I0728 15:10:33.767148   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:33.767153   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:33.767158   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:33.767165   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:33.767175   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:33 GMT
	I0728 15:10:33.767454   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:33.767643   20887 pod_ready.go:102] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"False"
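
The block above is one iteration of minikube's readiness wait: roughly every 500 ms it GETs the kube-controller-manager pod, then its node, and pod_ready.go logs the pod's Ready condition. A minimal client-go sketch of that polling pattern follows; it is illustrative only, not minikube's actual pod_ready.go, and the package and function names (readiness, pollPodReady) are hypothetical.

package readiness // illustrative package, not minikube code

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// pollPodReady re-fetches the pod on a fixed interval and returns once its
// Ready condition reports True, mirroring the ~500ms GET cadence in the log.
func pollPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == v1.PodReady && cond.Status == v1.ConditionTrue {
				return nil // pod is Ready; stop polling
			}
		}
		// Corresponds to the pod_ready.go:102 lines in this log: still waiting.
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		select {
		case <-ctx.Done():
			return ctx.Err() // give up when the caller's deadline expires
		case <-ticker.C:
		}
	}
}
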
	I0728 15:10:34.261518   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:34.261540   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:34.261553   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:34.261564   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:34.265390   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:34.265406   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:34.265414   20887 round_trippers.go:580]     Audit-Id: 183568fa-2c80-4fa8-99cb-fae4a4ed516d
	I0728 15:10:34.265420   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:34.265426   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:34.265433   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:34.265439   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:34.265445   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:34 GMT
	I0728 15:10:34.265524   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:34.267000   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:34.267024   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:34.267057   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:34.267197   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:34.269032   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:34.269041   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:34.269046   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:34.269050   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:34.269055   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:34.269060   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:34.269065   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:34 GMT
	I0728 15:10:34.269069   20887 round_trippers.go:580]     Audit-Id: 39cac272-2d8d-4b0b-9afc-cb2d89210379
	I0728 15:10:34.269111   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:34.760554   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:34.760570   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:34.760577   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:34.760582   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:34.763313   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:34.763326   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:34.763332   20887 round_trippers.go:580]     Audit-Id: 5199507f-c112-4185-8a5a-46992e91b5b4
	I0728 15:10:34.763341   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:34.763347   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:34.763351   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:34.763356   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:34.763361   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:34 GMT
	I0728 15:10:34.763430   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:34.763723   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:34.763730   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:34.763736   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:34.763742   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:34.765762   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:34.765771   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:34.765778   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:34.765783   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:34.765788   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:34 GMT
	I0728 15:10:34.765793   20887 round_trippers.go:580]     Audit-Id: e7cbc4e5-7086-406b-b00d-866bf15ed69e
	I0728 15:10:34.765798   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:34.765803   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:34.765861   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:35.262169   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:35.262195   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:35.262207   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:35.262218   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:35.266317   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:35.266332   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:35.266340   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:35.266347   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:35 GMT
	I0728 15:10:35.266354   20887 round_trippers.go:580]     Audit-Id: 51b10cbe-a099-4ae1-a2ac-4f69f131e42a
	I0728 15:10:35.266360   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:35.266366   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:35.266375   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:35.266450   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:35.266727   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:35.266733   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:35.266739   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:35.266744   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:35.268577   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:35.268585   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:35.268590   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:35.268597   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:35.268607   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:35 GMT
	I0728 15:10:35.268623   20887 round_trippers.go:580]     Audit-Id: 6a894c42-dfbf-4a9c-a9aa-4478929778fa
	I0728 15:10:35.268634   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:35.268665   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:35.268908   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:35.760463   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:35.760485   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:35.760497   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:35.760507   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:35.764570   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:35.764587   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:35.764598   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:35.764607   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:35 GMT
	I0728 15:10:35.764615   20887 round_trippers.go:580]     Audit-Id: 677f457b-4fc3-49ba-a446-cf72f491f4f3
	I0728 15:10:35.764625   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:35.764633   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:35.764639   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:35.764712   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:35.764986   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:35.764994   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:35.765000   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:35.765008   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:35.766971   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:35.766980   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:35.766986   20887 round_trippers.go:580]     Audit-Id: 4ad89767-e54c-48d4-8b95-01a5b5b96325
	I0728 15:10:35.766993   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:35.766999   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:35.767005   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:35.767013   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:35.767019   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:35 GMT
	I0728 15:10:35.767144   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:36.260873   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:36.260894   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:36.260906   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:36.260915   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:36.264388   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:36.264407   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:36.264421   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:36.264434   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:36.264453   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:36.264464   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:36.264473   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:36 GMT
	I0728 15:10:36.264480   20887 round_trippers.go:580]     Audit-Id: 5df957a5-9aa6-420e-b575-a0124306c220
	I0728 15:10:36.264656   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:36.265027   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:36.265037   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:36.265045   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:36.265052   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:36.267191   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:36.267202   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:36.267210   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:36.267217   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:36.267222   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:36 GMT
	I0728 15:10:36.267229   20887 round_trippers.go:580]     Audit-Id: 073f3c6b-ebad-41b9-84de-90929d6959fc
	I0728 15:10:36.267234   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:36.267238   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:36.267294   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:36.267484   20887 pod_ready.go:102] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:10:36.761570   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:36.761590   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:36.761606   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:36.761617   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:36.765861   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:36.765890   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:36.765896   20887 round_trippers.go:580]     Audit-Id: 1efca274-96b6-4337-b334-a56863d2a131
	I0728 15:10:36.765900   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:36.765905   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:36.765909   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:36.765914   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:36.765918   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:36 GMT
	I0728 15:10:36.765997   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:36.766270   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:36.766276   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:36.766284   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:36.766290   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:36.768016   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:36.768026   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:36.768032   20887 round_trippers.go:580]     Audit-Id: 8263e809-97ab-41ab-a5a3-4d71fc4ab76a
	I0728 15:10:36.768037   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:36.768041   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:36.768046   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:36.768051   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:36.768056   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:36 GMT
	I0728 15:10:36.768109   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:37.260211   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:37.260224   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:37.260230   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:37.260235   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:37.262594   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:37.262606   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:37.262611   20887 round_trippers.go:580]     Audit-Id: 107272f5-40d3-4c19-a419-e583886f24e8
	I0728 15:10:37.262616   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:37.262623   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:37.262629   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:37.262634   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:37.262639   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:37 GMT
	I0728 15:10:37.262733   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:37.263085   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:37.263092   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:37.263098   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:37.263104   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:37.265022   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:37.265038   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:37.265044   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:37.265049   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:37.265053   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:37.265058   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:37 GMT
	I0728 15:10:37.265065   20887 round_trippers.go:580]     Audit-Id: 19d85165-057c-436f-ba0d-6854fe197346
	I0728 15:10:37.265071   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:37.265121   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:37.760441   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:37.760463   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:37.760476   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:37.760486   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:37.764964   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:37.764977   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:37.764983   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:37.764988   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:37.764993   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:37 GMT
	I0728 15:10:37.764998   20887 round_trippers.go:580]     Audit-Id: 55bea9dc-69fb-4198-acd2-4a4e4ac5c756
	I0728 15:10:37.765003   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:37.765007   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:37.765090   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:37.765380   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:37.765387   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:37.765393   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:37.765398   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:37.767222   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:37.767231   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:37.767236   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:37.767242   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:37 GMT
	I0728 15:10:37.767246   20887 round_trippers.go:580]     Audit-Id: 1b40b039-85f0-4264-b49f-07fe55f03ace
	I0728 15:10:37.767251   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:37.767255   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:37.767260   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:37.767305   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:38.260565   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:38.260585   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:38.260598   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:38.260608   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:38.264461   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:38.264479   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:38.264489   20887 round_trippers.go:580]     Audit-Id: 1bdcdbba-6073-4f99-921c-813ac4e1757a
	I0728 15:10:38.264499   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:38.264508   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:38.264515   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:38.264521   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:38.264526   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:38 GMT
	I0728 15:10:38.265093   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:38.265394   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:38.265401   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:38.265407   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:38.265413   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:38.267267   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:38.267278   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:38.267285   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:38.267290   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:38.267299   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:38.267304   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:38.267309   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:38 GMT
	I0728 15:10:38.267313   20887 round_trippers.go:580]     Audit-Id: ae5f0c88-3b5c-4587-bd74-4aa339f01e38
	I0728 15:10:38.267620   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:38.267801   20887 pod_ready.go:102] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"False"
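
The interleaved node GETs are the other half of the wait: each iteration also re-reads the Node object alongside the pod. A hypothetical companion helper in the same style as the sketch above (sharing its imports; nodeReady is an invented name, not minikube code):

// nodeReady reports whether the named node's Ready condition is True,
// the node-side analogue of the pod poll sketched earlier.
func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady {
			return cond.Status == v1.ConditionTrue, nil
		}
	}
	return false, nil // no Ready condition reported yet
}
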
	I0728 15:10:38.761718   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:38.761738   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:38.761750   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:38.761761   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:38.766233   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:38.766245   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:38.766251   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:38 GMT
	I0728 15:10:38.766255   20887 round_trippers.go:580]     Audit-Id: 89894f36-e222-4441-80f5-c67e2ac4e6d5
	I0728 15:10:38.766260   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:38.766265   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:38.766270   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:38.766276   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:38.766338   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:38.766614   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:38.766621   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:38.766627   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:38.766633   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:38.768463   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:38.768473   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:38.768480   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:38.768484   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:38.768489   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:38.768495   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:38 GMT
	I0728 15:10:38.768499   20887 round_trippers.go:580]     Audit-Id: 1fc846b9-1375-49e9-bf89-27c2439c0dcb
	I0728 15:10:38.768509   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:38.768792   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:39.260347   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:39.260369   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:39.260382   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:39.260392   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:39.264749   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:39.264764   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:39.264772   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:39 GMT
	I0728 15:10:39.264777   20887 round_trippers.go:580]     Audit-Id: 4cc281ca-e737-4951-a75b-0423e9fcc720
	I0728 15:10:39.264781   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:39.264786   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:39.264790   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:39.264794   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:39.264855   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:39.265163   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:39.265170   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:39.265176   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:39.265181   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:39.267056   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:39.267065   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:39.267070   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:39 GMT
	I0728 15:10:39.267075   20887 round_trippers.go:580]     Audit-Id: 9819eb6f-0da9-44f6-b73f-93419d35783d
	I0728 15:10:39.267081   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:39.267085   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:39.267090   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:39.267094   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:39.267143   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:39.760262   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:39.760275   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:39.760281   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:39.760286   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:39.762964   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:39.762973   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:39.762979   20887 round_trippers.go:580]     Audit-Id: 34c971e5-f5e0-4b8a-91c5-0228cf20c5f2
	I0728 15:10:39.762983   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:39.762988   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:39.762993   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:39.762997   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:39.763001   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:39 GMT
	I0728 15:10:39.763055   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:39.763335   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:39.763341   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:39.763347   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:39.763352   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:39.764935   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:39.764944   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:39.764950   20887 round_trippers.go:580]     Audit-Id: 11e4902e-f9fa-4b0c-9d97-b660a6f1fb6e
	I0728 15:10:39.764957   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:39.764965   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:39.764970   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:39.764975   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:39.764980   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:39 GMT
	I0728 15:10:39.765239   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:40.260510   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:40.260533   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:40.260544   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:40.260554   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:40.263829   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:40.263839   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:40.263845   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:40.263850   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:40.263855   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:40.263859   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:40 GMT
	I0728 15:10:40.263864   20887 round_trippers.go:580]     Audit-Id: 31c1b84e-36ed-407f-b416-7fb5167a0fd6
	I0728 15:10:40.263869   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:40.264253   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:40.264552   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:40.264562   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:40.264568   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:40.264574   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:40.266510   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:40.266518   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:40.266524   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:40 GMT
	I0728 15:10:40.266528   20887 round_trippers.go:580]     Audit-Id: 20c31cb5-8d4d-4ec3-9f88-ce1e534b5287
	I0728 15:10:40.266533   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:40.266538   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:40.266545   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:40.266551   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:40.266716   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:40.762055   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:40.762077   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:40.762090   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:40.762101   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:40.765894   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:40.765909   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:40.765917   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:40.765923   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:40.765929   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:40 GMT
	I0728 15:10:40.765936   20887 round_trippers.go:580]     Audit-Id: 546dcaee-7603-4ad8-8dd2-a353ed34d5c4
	I0728 15:10:40.765943   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:40.765950   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:40.766037   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:40.766396   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:40.766402   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:40.766408   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:40.766413   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:40.768459   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:40.768468   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:40.768473   20887 round_trippers.go:580]     Audit-Id: 185e3fdd-6e08-4a9f-b490-2c2ae3a0a826
	I0728 15:10:40.768478   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:40.768483   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:40.768487   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:40.768492   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:40.768497   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:40 GMT
	I0728 15:10:40.768541   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:40.768723   20887 pod_ready.go:102] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:10:41.260288   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:41.260308   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:41.260319   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:41.260329   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:41.263397   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:41.263409   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:41.263414   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:41.263419   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:41.263423   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:41.263428   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:41.263432   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:41 GMT
	I0728 15:10:41.263437   20887 round_trippers.go:580]     Audit-Id: 7ef9496c-bde5-457f-aea4-a6fd3e54cb77
	I0728 15:10:41.263607   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:41.263895   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:41.263901   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:41.263907   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:41.263912   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:41.268271   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:41.268282   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:41.268288   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:41.268292   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:41.268296   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:41.268301   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:41 GMT
	I0728 15:10:41.268305   20887 round_trippers.go:580]     Audit-Id: 2bebbf52-31ad-4954-9def-e6dc05c818dc
	I0728 15:10:41.268309   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:41.268354   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:41.760293   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:41.760306   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:41.760313   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:41.760317   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:41.762310   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:41.762321   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:41.762328   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:41.762340   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:41.762346   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:41 GMT
	I0728 15:10:41.762352   20887 round_trippers.go:580]     Audit-Id: 0d3a470f-139f-4726-b071-cd11ad183752
	I0728 15:10:41.762360   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:41.762365   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:41.762426   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:41.762718   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:41.762725   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:41.762731   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:41.762737   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:41.764906   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:41.764917   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:41.764923   20887 round_trippers.go:580]     Audit-Id: 4b119874-3e41-4874-9906-09cd53ca8abc
	I0728 15:10:41.764928   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:41.764932   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:41.764937   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:41.764942   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:41.764947   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:41 GMT
	I0728 15:10:41.764999   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:42.260145   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:42.260161   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:42.260172   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:42.260178   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:42.262421   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:42.262431   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:42.262438   20887 round_trippers.go:580]     Audit-Id: 99cd325c-24b9-4cd5-8538-5b043d4f9519
	I0728 15:10:42.262442   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:42.262448   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:42.262452   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:42.262457   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:42.262463   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:42 GMT
	I0728 15:10:42.262522   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:42.262798   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:42.262805   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:42.262810   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:42.262815   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:42.264642   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:42.264651   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:42.264658   20887 round_trippers.go:580]     Audit-Id: fe1119d5-bf81-4f81-a3ae-809bc38953e8
	I0728 15:10:42.264663   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:42.264668   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:42.264673   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:42.264677   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:42.264682   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:42 GMT
	I0728 15:10:42.264729   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:42.760577   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:42.760599   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:42.760638   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:42.760647   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:42.764005   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:42.764018   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:42.764024   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:42.764029   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:42.764034   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:42.764038   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:42.764043   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:42 GMT
	I0728 15:10:42.764048   20887 round_trippers.go:580]     Audit-Id: 9522f679-a4e0-4ace-8e63-eb68b51e9cd4
	I0728 15:10:42.764111   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:42.764391   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:42.764397   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:42.764403   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:42.764407   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:42.766158   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:42.766167   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:42.766173   20887 round_trippers.go:580]     Audit-Id: 376884b1-4498-4a6e-a910-d0a1c9642c16
	I0728 15:10:42.766178   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:42.766183   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:42.766188   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:42.766194   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:42.766198   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:42 GMT
	I0728 15:10:42.766242   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:43.260319   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:43.260392   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:43.260403   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:43.260410   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:43.263077   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:43.263087   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:43.263093   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:43.263109   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:43.263117   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:43 GMT
	I0728 15:10:43.263122   20887 round_trippers.go:580]     Audit-Id: adc14e05-1ea8-48a5-a6be-32edc9b0d323
	I0728 15:10:43.263126   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:43.263133   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:43.263191   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:43.263466   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:43.263472   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:43.263478   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:43.263483   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:43.265360   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:43.265370   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:43.265376   20887 round_trippers.go:580]     Audit-Id: 3c70ca6f-608c-4122-b6ba-887a8f54397c
	I0728 15:10:43.265381   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:43.265386   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:43.265391   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:43.265395   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:43.265399   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:43 GMT
	I0728 15:10:43.265442   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:43.265622   20887 pod_ready.go:102] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"False"
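The pod_ready polling visible above — one GET of the pod followed by one GET of its node, repeated roughly every 500 ms until the pod's Ready condition flips to True — is the standard client-go readiness-wait pattern. Below is a minimal sketch of such a loop, assuming client-go, a kubeconfig in the default location, and a 500 ms interval inferred from the log timestamps; it is an illustration of the pattern, not minikube's actual pod_ready.go implementation.

// Illustrative sketch only: poll a pod until its Ready condition is True,
// in the style of the log above. Names are copied from the log; the
// interval and kubeconfig handling are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load the kubeconfig the way kubectl does (~/.kube/config by default).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	const (
		ns   = "kube-system"
		name = "kube-controller-manager-multinode-20220728150610-12923"
	)

	// Give up after a bounded wait, like the test's overall timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	ticker := time.NewTicker(500 * time.Millisecond) // interval inferred from log timestamps
	defer ticker.Stop()

	for {
		pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod readiness")
			return
		case <-ticker.C:
			// next poll
		}
	}
}

Each iteration here corresponds to one request/response pair in the log; the repeated "Ready":"False" lines in the output below are this loop observing that the kube-controller-manager pod has not yet become Ready.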
	I0728 15:10:43.760526   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:43.760546   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:43.760559   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:43.760568   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:43.764934   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:43.764947   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:43.764952   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:43.764957   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:43.764961   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:43.764966   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:43 GMT
	I0728 15:10:43.764970   20887 round_trippers.go:580]     Audit-Id: 64e7eb15-d054-4789-9267-3913bc178aa3
	I0728 15:10:43.764975   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:43.765037   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:43.765315   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:43.765322   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:43.765327   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:43.765332   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:43.767218   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:43.767232   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:43.767243   20887 round_trippers.go:580]     Audit-Id: a2c14bbd-33d0-4c6a-988e-f06b17ebef0a
	I0728 15:10:43.767249   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:43.767254   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:43.767259   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:43.767264   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:43.767269   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:43 GMT
	I0728 15:10:43.767316   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:44.260257   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:44.260276   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:44.260285   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:44.260292   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:44.263075   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:44.263086   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:44.263091   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:44.263112   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:44.263120   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:44 GMT
	I0728 15:10:44.263125   20887 round_trippers.go:580]     Audit-Id: 350a3cfb-4492-408d-901e-ed1c90b65f45
	I0728 15:10:44.263129   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:44.263134   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:44.263190   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:44.263466   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:44.263472   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:44.263478   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:44.263483   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:44.265242   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:44.265255   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:44.265264   20887 round_trippers.go:580]     Audit-Id: 78393d54-4057-4ba2-bc12-1d9a27f67bc0
	I0728 15:10:44.265272   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:44.265279   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:44.265286   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:44.265290   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:44.265297   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:44 GMT
	I0728 15:10:44.265704   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:44.760292   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:44.760318   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:44.760330   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:44.760340   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:44.763840   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:44.763853   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:44.763859   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:44.763863   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:44 GMT
	I0728 15:10:44.763873   20887 round_trippers.go:580]     Audit-Id: 7411ab8f-6363-4449-b1fd-eff3befe1225
	I0728 15:10:44.763877   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:44.763882   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:44.763886   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:44.764178   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"693","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8352 chars]
	I0728 15:10:44.764582   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:44.764591   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:44.764597   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:44.764602   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:44.766568   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:44.766581   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:44.766594   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:44.766599   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:44 GMT
	I0728 15:10:44.766604   20887 round_trippers.go:580]     Audit-Id: ff48eae8-bd21-441d-b53e-14eef268f914
	I0728 15:10:44.766609   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:44.766613   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:44.766618   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:44.766801   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:45.260244   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:10:45.260261   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.260270   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.260278   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.262774   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:45.262784   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.262790   20887 round_trippers.go:580]     Audit-Id: 2fd6f01e-c019-4aab-8309-950a00f73440
	I0728 15:10:45.262794   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.262798   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.262802   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.262807   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.262812   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.263014   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"778","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8090 chars]
	I0728 15:10:45.263294   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:45.263301   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.263306   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.263312   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.265004   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.265013   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.265018   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.265022   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.265027   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.265032   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.265037   20887 round_trippers.go:580]     Audit-Id: fa4eede5-189c-47ff-ba76-8f675fde9392
	I0728 15:10:45.265042   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.265394   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:45.265577   20887 pod_ready.go:92] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:45.265589   20887 pod_ready.go:81] duration metric: took 13.555908334s waiting for pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.265597   20887 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bxdk6" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.265626   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-bxdk6
	I0728 15:10:45.265632   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.265645   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.265651   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.267540   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.267549   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.267554   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.267560   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.267564   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.267569   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.267574   20887 round_trippers.go:580]     Audit-Id: 7b319a0e-7e40-4dc7-956c-15ab92fc7fa4
	I0728 15:10:45.267579   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.267617   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bxdk6","generateName":"kube-proxy-","namespace":"kube-system","uid":"befca8fa-aef6-415a-b033-8522067db320","resourceVersion":"474","creationTimestamp":"2022-07-28T22:07:45Z","labels":{"controller-revision-hash":"94985b49","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:07:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5548 chars]
	I0728 15:10:45.267844   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923-m02
	I0728 15:10:45.267851   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.267856   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.267861   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.269411   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.269420   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.269425   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.269430   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.269435   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.269439   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.269444   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.269448   20887 round_trippers.go:580]     Audit-Id: 500a00e3-3071-4921-93b0-dc39a3dd37a0
	I0728 15:10:45.269699   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923-m02","uid":"2d4cf78c-ed12-4ef8-9967-e85ae2ffd232","resourceVersion":"556","creationTimestamp":"2022-07-28T22:07:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:07:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:07:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 4523 chars]
	I0728 15:10:45.269858   20887 pod_ready.go:92] pod "kube-proxy-bxdk6" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:45.269864   20887 pod_ready.go:81] duration metric: took 4.261143ms waiting for pod "kube-proxy-bxdk6" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.269869   20887 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cdz7z" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.269891   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-cdz7z
	I0728 15:10:45.269895   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.269901   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.269906   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.271646   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.271655   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.271660   20887 round_trippers.go:580]     Audit-Id: 7eae9957-76b9-4857-a730-2433bb623c68
	I0728 15:10:45.271664   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.271670   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.271677   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.271682   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.271687   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.271728   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cdz7z","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9727653-ed51-43f3-95ad-fd2f5fb0ac6e","resourceVersion":"704","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"controller-revision-hash":"94985b49","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5747 chars]
	I0728 15:10:45.271953   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:45.271960   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.271965   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.271971   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.273648   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.273658   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.273665   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.273671   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.273676   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.273680   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.273685   20887 round_trippers.go:580]     Audit-Id: d5b6380f-22e8-4b04-96a0-acf892f973e7
	I0728 15:10:45.273689   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.273940   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:45.274120   20887 pod_ready.go:92] pod "kube-proxy-cdz7z" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:45.274127   20887 pod_ready.go:81] duration metric: took 4.253147ms waiting for pod "kube-proxy-cdz7z" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.274132   20887 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cn9x2" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.274155   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-cn9x2
	I0728 15:10:45.274159   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.274165   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.274170   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.275827   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.275836   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.275841   20887 round_trippers.go:580]     Audit-Id: f2dd8787-5605-478d-a9e0-990d788947e4
	I0728 15:10:45.275845   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.275850   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.275854   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.275858   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.275863   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.276088   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cn9x2","generateName":"kube-proxy-","namespace":"kube-system","uid":"813dc8a0-2ea3-4ee9-83ce-fe09ccf38295","resourceVersion":"671","creationTimestamp":"2022-07-28T22:08:39Z","labels":{"controller-revision-hash":"94985b49","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:08:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5755 chars]
	I0728 15:10:45.276313   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923-m03
	I0728 15:10:45.276320   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.276326   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.276332   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.277836   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.277845   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.277850   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.277855   20887 round_trippers.go:580]     Audit-Id: 2ff6c7e6-3fd1-44f4-8676-5b4e545ffad4
	I0728 15:10:45.277860   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.277865   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.277870   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.277875   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.278084   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923-m03","uid":"705fe4c5-d194-48b6-83d4-926ad5fead86","resourceVersion":"686","creationTimestamp":"2022-07-28T22:09:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:09:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:09:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 4340 chars]
	I0728 15:10:45.278242   20887 pod_ready.go:92] pod "kube-proxy-cn9x2" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:45.278248   20887 pod_ready.go:81] duration metric: took 4.111555ms waiting for pod "kube-proxy-cn9x2" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.278253   20887 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.278278   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220728150610-12923
	I0728 15:10:45.278282   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.278289   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.278295   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.279986   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.279997   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.280005   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.280012   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.280019   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.280026   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.280032   20887 round_trippers.go:580]     Audit-Id: 925404fc-2cb7-49af-a90c-3a8502b855dd
	I0728 15:10:45.280039   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.280198   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220728150610-12923","namespace":"kube-system","uid":"ef5d84ce-4249-4af0-b1be-7a3d7f8c2205","resourceVersion":"742","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"164dd1e1cbdc7905e66f2af11f156d06","kubernetes.io/config.mirror":"164dd1e1cbdc7905e66f2af11f156d06","kubernetes.io/config.seen":"2022-07-28T22:06:37.255019449Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i [truncated 4974 chars]
	I0728 15:10:45.280403   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:45.280410   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.280417   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.280425   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.282198   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:45.282209   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.282216   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.282223   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.282230   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.282236   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.282245   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.282253   20887 round_trippers.go:580]     Audit-Id: e65c3f83-62ec-401f-bd44-5cd16b3f1076
	I0728 15:10:45.282292   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:45.282482   20887 pod_ready.go:92] pod "kube-scheduler-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:10:45.282490   20887 pod_ready.go:81] duration metric: took 4.231427ms waiting for pod "kube-scheduler-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.282495   20887 pod_ready.go:38] duration metric: took 13.6007007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:10:45.282506   20887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 15:10:45.289939   20887 command_runner.go:130] > -16
	I0728 15:10:45.289955   20887 ops.go:34] apiserver oom_adj: -16
	I0728 15:10:45.289959   20887 kubeadm.go:630] restartCluster took 24.369528973s
	I0728 15:10:45.289964   20887 kubeadm.go:397] StartCluster complete in 24.406112915s
	I0728 15:10:45.289975   20887 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:10:45.290052   20887 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:10:45.290409   20887 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:10:45.290839   20887 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:10:45.291006   20887 kapi.go:59] client config for multinode-20220728150610-12923: &rest.Config{Host:"https://127.0.0.1:56607", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 15:10:45.291190   20887 round_trippers.go:463] GET https://127.0.0.1:56607/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0728 15:10:45.291197   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.291203   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.291209   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.293279   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:45.293289   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.293294   20887 round_trippers.go:580]     Audit-Id: 7bad219c-59cb-48c4-9e4b-0f36d8615f9a
	I0728 15:10:45.293299   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.293304   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.293309   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.293314   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.293318   20887 round_trippers.go:580]     Content-Length: 291
	I0728 15:10:45.293322   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.293380   20887 request.go:1073] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"6b59e24c-c365-439a-855f-a8318765ac15","resourceVersion":"762","creationTimestamp":"2022-07-28T22:06:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0728 15:10:45.293467   20887 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-20220728150610-12923" rescaled to 1
	I0728 15:10:45.293494   20887 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 15:10:45.293511   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 15:10:45.315971   20887 out.go:177] * Verifying Kubernetes components...
	I0728 15:10:45.293525   20887 addons.go:412] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I0728 15:10:45.293687   20887 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:10:45.358116   20887 addons.go:65] Setting storage-provisioner=true in profile "multinode-20220728150610-12923"
	I0728 15:10:45.358118   20887 addons.go:65] Setting default-storageclass=true in profile "multinode-20220728150610-12923"
	I0728 15:10:45.358135   20887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:10:45.358141   20887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-20220728150610-12923"
	I0728 15:10:45.358145   20887 addons.go:153] Setting addon storage-provisioner=true in "multinode-20220728150610-12923"
	W0728 15:10:45.358187   20887 addons.go:162] addon storage-provisioner should already be in state true
	I0728 15:10:45.358246   20887 host.go:66] Checking if "multinode-20220728150610-12923" exists ...
	I0728 15:10:45.358462   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923 --format={{.State.Status}}
	I0728 15:10:45.358589   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923 --format={{.State.Status}}
	I0728 15:10:45.370224   20887 command_runner.go:130] > apiVersion: v1
	I0728 15:10:45.370249   20887 command_runner.go:130] > data:
	I0728 15:10:45.370255   20887 command_runner.go:130] >   Corefile: |
	I0728 15:10:45.370274   20887 command_runner.go:130] >     .:53 {
	I0728 15:10:45.370290   20887 command_runner.go:130] >         errors
	I0728 15:10:45.370300   20887 command_runner.go:130] >         health {
	I0728 15:10:45.370306   20887 command_runner.go:130] >            lameduck 5s
	I0728 15:10:45.370310   20887 command_runner.go:130] >         }
	I0728 15:10:45.370313   20887 command_runner.go:130] >         ready
	I0728 15:10:45.370320   20887 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0728 15:10:45.370325   20887 command_runner.go:130] >            pods insecure
	I0728 15:10:45.370332   20887 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0728 15:10:45.370339   20887 command_runner.go:130] >            ttl 30
	I0728 15:10:45.370343   20887 command_runner.go:130] >         }
	I0728 15:10:45.370346   20887 command_runner.go:130] >         prometheus :9153
	I0728 15:10:45.370349   20887 command_runner.go:130] >         hosts {
	I0728 15:10:45.370353   20887 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I0728 15:10:45.370359   20887 command_runner.go:130] >            fallthrough
	I0728 15:10:45.370362   20887 command_runner.go:130] >         }
	I0728 15:10:45.370367   20887 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0728 15:10:45.370377   20887 command_runner.go:130] >            max_concurrent 1000
	I0728 15:10:45.370381   20887 command_runner.go:130] >         }
	I0728 15:10:45.370384   20887 command_runner.go:130] >         cache 30
	I0728 15:10:45.370387   20887 command_runner.go:130] >         loop
	I0728 15:10:45.370391   20887 command_runner.go:130] >         reload
	I0728 15:10:45.370396   20887 command_runner.go:130] >         loadbalance
	I0728 15:10:45.370401   20887 command_runner.go:130] >     }
	I0728 15:10:45.370426   20887 command_runner.go:130] > kind: ConfigMap
	I0728 15:10:45.370460   20887 command_runner.go:130] > metadata:
	I0728 15:10:45.370473   20887 command_runner.go:130] >   creationTimestamp: "2022-07-28T22:06:37Z"
	I0728 15:10:45.370480   20887 command_runner.go:130] >   name: coredns
	I0728 15:10:45.370488   20887 command_runner.go:130] >   namespace: kube-system
	I0728 15:10:45.370497   20887 command_runner.go:130] >   resourceVersion: "364"
	I0728 15:10:45.370504   20887 command_runner.go:130] >   uid: d879b80e-f5be-4575-a685-39df1fda8448
	I0728 15:10:45.373627   20887 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0728 15:10:45.373706   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:45.430697   20887 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:10:45.430901   20887 kapi.go:59] client config for multinode-20220728150610-12923: &rest.Config{Host:"https://127.0.0.1:56607", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 15:10:45.431168   20887 round_trippers.go:463] GET https://127.0.0.1:56607/apis/storage.k8s.io/v1/storageclasses
	I0728 15:10:45.431175   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.452268   20887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:10:45.452276   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.489179   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.489314   20887 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:10:45.489335   20887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 15:10:45.489464   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:45.493204   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:45.493229   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.493248   20887 round_trippers.go:580]     Audit-Id: 5def914f-74ad-47f7-9f51-e60a5e372ba3
	I0728 15:10:45.493255   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.493266   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.493284   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.493290   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.493295   20887 round_trippers.go:580]     Content-Length: 1273
	I0728 15:10:45.493300   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.493344   20887 request.go:1073] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"782"},"items":[{"metadata":{"name":"standard","uid":"e375d519-9f75-4e2d-8e80-c5ab845c65d4","resourceVersion":"376","creationTimestamp":"2022-07-28T22:06:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-07-28T22:06:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0728 15:10:45.493790   20887 request.go:1073] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"e375d519-9f75-4e2d-8e80-c5ab845c65d4","resourceVersion":"376","creationTimestamp":"2022-07-28T22:06:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-07-28T22:06:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0728 15:10:45.493829   20887 round_trippers.go:463] PUT https://127.0.0.1:56607/apis/storage.k8s.io/v1/storageclasses/standard
	I0728 15:10:45.493838   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.493846   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.493855   20887 round_trippers.go:473]     Content-Type: application/json
	I0728 15:10:45.493862   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.497502   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:45.497515   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.497521   20887 round_trippers.go:580]     Audit-Id: 2e1bb0fa-3dce-4d85-85f6-c9fe280d5ae6
	I0728 15:10:45.497525   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.497530   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.497535   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.497539   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.497544   20887 round_trippers.go:580]     Content-Length: 1220
	I0728 15:10:45.497548   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.497564   20887 request.go:1073] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"e375d519-9f75-4e2d-8e80-c5ab845c65d4","resourceVersion":"376","creationTimestamp":"2022-07-28T22:06:51Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-07-28T22:06:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0728 15:10:45.497633   20887 addons.go:153] Setting addon default-storageclass=true in "multinode-20220728150610-12923"
	W0728 15:10:45.497640   20887 addons.go:162] addon default-storageclass should already be in state true
	I0728 15:10:45.497658   20887 host.go:66] Checking if "multinode-20220728150610-12923" exists ...
	I0728 15:10:45.498003   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923 --format={{.State.Status}}
	I0728 15:10:45.498913   20887 node_ready.go:35] waiting up to 6m0s for node "multinode-20220728150610-12923" to be "Ready" ...
	I0728 15:10:45.499491   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:45.499497   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.499503   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.499509   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.502350   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:45.502365   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.502371   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.502382   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.502387   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.502392   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.502397   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.502401   20887 round_trippers.go:580]     Audit-Id: 4ea06635-25b5-497d-91a3-385d0663be03
	I0728 15:10:45.502519   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:45.502783   20887 node_ready.go:49] node "multinode-20220728150610-12923" has status "Ready":"True"
	I0728 15:10:45.502790   20887 node_ready.go:38] duration metric: took 3.845298ms waiting for node "multinode-20220728150610-12923" to be "Ready" ...
	I0728 15:10:45.502795   20887 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:10:45.559770   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:45.566123   20887 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 15:10:45.566138   20887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 15:10:45.566196   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:10:45.631837   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:10:45.652586   20887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:10:45.660380   20887 request.go:533] Waited for 157.530398ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:10:45.660410   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:10:45.660415   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.660422   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.660428   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.664791   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:45.664802   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.664807   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.664813   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.664818   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.664824   20887 round_trippers.go:580]     Audit-Id: c0b5ee39-3463-435c-9a96-277f24f38d5d
	I0728 15:10:45.664830   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.664840   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.666678   20887 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"783"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".": [truncated 84805 chars]
	I0728 15:10:45.668637   20887 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace to be "Ready" ...
	I0728 15:10:45.725012   20887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 15:10:45.798274   20887 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0728 15:10:45.799901   20887 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0728 15:10:45.801689   20887 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0728 15:10:45.803566   20887 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0728 15:10:45.805189   20887 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0728 15:10:45.838090   20887 command_runner.go:130] > pod/storage-provisioner configured
	I0728 15:10:45.860572   20887 request.go:533] Waited for 191.888282ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:45.860622   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:45.860628   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:45.860636   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:45.860645   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:45.863610   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:45.863627   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:45.863639   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:45 GMT
	I0728 15:10:45.863648   20887 round_trippers.go:580]     Audit-Id: bfffcf4e-af2e-420b-9dbf-97afa6d03e9c
	I0728 15:10:45.863654   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:45.863660   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:45.863665   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:45.863669   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:45.863822   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:45.870769   20887 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0728 15:10:45.898314   20887 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0728 15:10:45.957374   20887 addons.go:414] enableAddons completed in 663.85ms
	I0728 15:10:46.060324   20887 request.go:533] Waited for 196.181154ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:46.060378   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:46.060384   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:46.060392   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:46.060401   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:46.062836   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:46.062849   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:46.062857   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:46.062866   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:46.062874   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:46 GMT
	I0728 15:10:46.062879   20887 round_trippers.go:580]     Audit-Id: 3ec4147b-e6b1-4b59-aeef-9aa522b2872e
	I0728 15:10:46.062885   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:46.062892   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:46.063225   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:46.564349   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:46.564369   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:46.564380   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:46.564390   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:46.571082   20887 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0728 15:10:46.571094   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:46.571100   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:46.571105   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:46.571111   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:46.571125   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:46 GMT
	I0728 15:10:46.571135   20887 round_trippers.go:580]     Audit-Id: 7a2fad29-2f14-4cb1-99d9-dfef1a966a57
	I0728 15:10:46.571145   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:46.571207   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:46.571482   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:46.571488   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:46.571496   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:46.571502   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:46.573831   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:46.573846   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:46.573851   20887 round_trippers.go:580]     Audit-Id: 5445bcb3-91f8-4ab6-a40c-94f0d39e9a33
	I0728 15:10:46.573856   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:46.573861   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:46.573866   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:46.573871   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:46.573877   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:46 GMT
	I0728 15:10:46.573931   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:47.065346   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:47.065367   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:47.065383   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:47.065393   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:47.069343   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:47.069357   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:47.069366   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:47 GMT
	I0728 15:10:47.069373   20887 round_trippers.go:580]     Audit-Id: a7e5dba7-7f15-42e0-af1b-9e55264ca985
	I0728 15:10:47.069379   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:47.069386   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:47.069394   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:47.069400   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:47.069484   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:47.069767   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:47.069774   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:47.069780   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:47.069785   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:47.071647   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:47.071657   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:47.071663   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:47.071669   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:47 GMT
	I0728 15:10:47.071673   20887 round_trippers.go:580]     Audit-Id: 31078036-856a-4028-8273-61b2ffed7e95
	I0728 15:10:47.071678   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:47.071683   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:47.071687   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:47.071852   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:47.564076   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:47.564091   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:47.564097   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:47.564103   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:47.566584   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:47.566595   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:47.566601   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:47 GMT
	I0728 15:10:47.566605   20887 round_trippers.go:580]     Audit-Id: 1fd4aacd-16e9-46fe-806a-ab3f0c873488
	I0728 15:10:47.566611   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:47.566631   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:47.566640   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:47.566645   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:47.566906   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:47.567193   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:47.567202   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:47.567208   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:47.567214   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:47.569143   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:47.569155   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:47.569163   20887 round_trippers.go:580]     Audit-Id: 6512e4f8-4ea8-4051-883a-2873047fdc10
	I0728 15:10:47.569169   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:47.569184   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:47.569192   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:47.569199   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:47.569206   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:47 GMT
	I0728 15:10:47.569249   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:48.063713   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:48.063731   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:48.063743   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:48.063761   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:48.067964   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:48.067986   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:48.068004   20887 round_trippers.go:580]     Audit-Id: 3455f0d3-08c2-400b-9fd5-d9660430f312
	I0728 15:10:48.068012   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:48.068017   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:48.068022   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:48.068027   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:48.068032   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:48 GMT
	I0728 15:10:48.068110   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:48.068382   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:48.068388   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:48.068394   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:48.068403   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:48.070364   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:48.070373   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:48.070378   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:48 GMT
	I0728 15:10:48.070383   20887 round_trippers.go:580]     Audit-Id: 2f4d5e2e-f2d4-49d9-9bcd-037ad103dea9
	I0728 15:10:48.070388   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:48.070393   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:48.070398   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:48.070403   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:48.070473   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:48.070670   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
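The pod_ready.go line above summarizes the polling loop that dominates this section: minikube fetches the coredns pod roughly every 500ms and reports that its Ready condition is still False (the adjacent node GETs fetch scheduling context for the same check). A hedged sketch of an equivalent readiness poll with client-go; the pod name and namespace are taken from the log, while the 5-minute timeout is our own illustrative choice:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, mirroring the
// check behind the `has status "Ready":"False"` lines above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, matching the spacing of the GETs in the log.
	err = wait.PollImmediate(500*time.Millisecond, 5*time.Minute, func() (bool, error) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-6d4b75cb6d-dfxk7", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return podReady(pod), nil
	})
	fmt.Println("ready:", err == nil)
}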
	I0728 15:10:48.563880   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:48.563901   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:48.563914   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:48.563925   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:48.567874   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:48.567890   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:48.567899   20887 round_trippers.go:580]     Audit-Id: b57b7b52-a223-4496-8e53-ee0e35407746
	I0728 15:10:48.567909   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:48.567917   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:48.567926   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:48.567935   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:48.567943   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:48 GMT
	I0728 15:10:48.568073   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:48.568352   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:48.568358   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:48.568363   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:48.568369   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:48.570207   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:48.570216   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:48.570222   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:48.570227   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:48 GMT
	I0728 15:10:48.570234   20887 round_trippers.go:580]     Audit-Id: 2dc9eb29-d22e-4042-b881-dee32e868611
	I0728 15:10:48.570241   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:48.570246   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:48.570250   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:48.570296   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:49.064380   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:49.064406   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:49.064418   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:49.064428   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:49.068365   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:49.068376   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:49.068381   20887 round_trippers.go:580]     Audit-Id: a7596f52-765d-4659-97c6-fa57e3e89f9d
	I0728 15:10:49.068386   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:49.068391   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:49.068395   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:49.068400   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:49.068405   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:49 GMT
	I0728 15:10:49.068465   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:49.068739   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:49.068745   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:49.068751   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:49.068756   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:49.070847   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:49.070857   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:49.070864   20887 round_trippers.go:580]     Audit-Id: 4e38523a-843d-468d-adac-cef9b55c0d1d
	I0728 15:10:49.070870   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:49.070877   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:49.070882   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:49.070887   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:49.070891   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:49 GMT
	I0728 15:10:49.070936   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:49.563984   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:49.563996   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:49.564003   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:49.564008   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:49.566580   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:49.566594   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:49.566600   20887 round_trippers.go:580]     Audit-Id: 3e599aac-819c-4d73-9062-855be48e69d0
	I0728 15:10:49.566605   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:49.566612   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:49.566618   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:49.566623   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:49.566632   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:49 GMT
	I0728 15:10:49.566794   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:49.567073   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:49.567079   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:49.567085   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:49.567091   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:49.569224   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:49.569236   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:49.569248   20887 round_trippers.go:580]     Audit-Id: 45c3ae35-65b3-476e-baa2-6671328b32dc
	I0728 15:10:49.569257   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:49.569266   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:49.569274   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:49.569280   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:49.569287   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:49 GMT
	I0728 15:10:49.569348   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:50.064029   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:50.064057   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:50.064071   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:50.064162   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:50.067702   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:50.067712   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:50.067718   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:50.067723   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:50.067728   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:50 GMT
	I0728 15:10:50.067733   20887 round_trippers.go:580]     Audit-Id: 891cf6a1-55cb-4f7e-a4f8-efba3dcf6b5f
	I0728 15:10:50.067738   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:50.067742   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:50.067794   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:50.068065   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:50.068072   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:50.068084   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:50.068092   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:50.069910   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:50.069921   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:50.069927   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:50 GMT
	I0728 15:10:50.069933   20887 round_trippers.go:580]     Audit-Id: 55185a66-c9d1-44ca-a4bd-225f030b4524
	I0728 15:10:50.069940   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:50.069946   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:50.069952   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:50.069957   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:50.070052   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:50.564469   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:50.564488   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:50.564501   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:50.564511   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:50.568659   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:50.568674   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:50.568681   20887 round_trippers.go:580]     Audit-Id: da1a6ce0-104e-45a1-9cbd-7e060414ad8a
	I0728 15:10:50.568687   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:50.568694   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:50.568706   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:50.568715   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:50.568721   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:50 GMT
	I0728 15:10:50.568809   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:50.569182   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:50.569189   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:50.569194   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:50.569199   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:50.570966   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:50.570976   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:50.570981   20887 round_trippers.go:580]     Audit-Id: 63ce3d7c-2f92-46dc-b367-8905f3940def
	I0728 15:10:50.570986   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:50.570991   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:50.570996   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:50.571001   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:50.571005   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:50 GMT
	I0728 15:10:50.571377   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:50.571562   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
	I0728 15:10:51.064328   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:51.064348   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:51.064360   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:51.064369   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:51.068420   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:51.068436   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:51.068446   20887 round_trippers.go:580]     Audit-Id: 376401c2-0e64-4a8e-b773-e5d947d26d1e
	I0728 15:10:51.068455   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:51.068462   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:51.068469   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:51.068476   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:51.068481   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:51 GMT
	I0728 15:10:51.068607   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:51.068880   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:51.068886   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:51.068892   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:51.068899   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:51.070717   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:51.070726   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:51.070732   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:51.070737   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:51.070746   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:51.070753   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:51 GMT
	I0728 15:10:51.070758   20887 round_trippers.go:580]     Audit-Id: 2a85c656-0db1-4391-add5-e0df95f75da1
	I0728 15:10:51.070763   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:51.070805   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:51.563721   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:51.563741   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:51.563759   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:51.563792   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:51.567554   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:51.567565   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:51.567571   20887 round_trippers.go:580]     Audit-Id: ff14b7d9-0633-4334-ad34-72ce978a4e47
	I0728 15:10:51.567576   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:51.567580   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:51.567585   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:51.567589   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:51.567594   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:51 GMT
	I0728 15:10:51.567669   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:51.567949   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:51.567957   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:51.567963   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:51.567969   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:51.570006   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:51.570015   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:51.570020   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:51.570026   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:51.570031   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:51.570035   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:51 GMT
	I0728 15:10:51.570040   20887 round_trippers.go:580]     Audit-Id: 96d6f515-0eeb-4939-8608-f5f7b3b59615
	I0728 15:10:51.570044   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:51.570354   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:52.065087   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:52.065103   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:52.065111   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:52.065119   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:52.068025   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:52.068038   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:52.068044   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:52 GMT
	I0728 15:10:52.068049   20887 round_trippers.go:580]     Audit-Id: de398890-4918-4b8b-aeb3-fbf181c4aefb
	I0728 15:10:52.068053   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:52.068058   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:52.068066   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:52.068073   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:52.068136   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:52.068453   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:52.068463   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:52.068469   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:52.068475   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:52.070405   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:52.070417   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:52.070425   20887 round_trippers.go:580]     Audit-Id: 091a8c8e-a6ad-40c7-b9dd-5e1743ebbc57
	I0728 15:10:52.070433   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:52.070453   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:52.070459   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:52.070464   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:52.070469   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:52 GMT
	I0728 15:10:52.070594   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:52.563608   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:52.563637   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:52.563649   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:52.563660   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:52.567598   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:52.567612   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:52.567620   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:52.567626   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:52.567634   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:52.567649   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:52 GMT
	I0728 15:10:52.567659   20887 round_trippers.go:580]     Audit-Id: 98fd0d8c-2f09-49b5-8822-f7ee95be0679
	I0728 15:10:52.567667   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:52.567735   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:52.568089   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:52.568096   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:52.568102   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:52.568107   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:52.570000   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:52.570009   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:52.570015   20887 round_trippers.go:580]     Audit-Id: 6b43ddcc-cc23-4644-8813-2f96ac7db4c5
	I0728 15:10:52.570019   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:52.570024   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:52.570028   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:52.570033   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:52.570038   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:52 GMT
	I0728 15:10:52.570081   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:53.063594   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:53.063610   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:53.063619   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:53.063625   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:53.066652   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:53.066663   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:53.066669   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:53 GMT
	I0728 15:10:53.066674   20887 round_trippers.go:580]     Audit-Id: 696bd6bf-bcaa-4c1c-9050-7a5f8403d797
	I0728 15:10:53.066678   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:53.066683   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:53.066693   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:53.066699   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:53.066764   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:53.067050   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:53.067060   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:53.067066   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:53.067071   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:53.069403   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:53.069414   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:53.069425   20887 round_trippers.go:580]     Audit-Id: c0814ef9-ca94-45aa-a1d4-fee9edcf1cc8
	I0728 15:10:53.069431   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:53.069436   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:53.069440   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:53.069445   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:53.069450   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:53 GMT
	I0728 15:10:53.069499   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:53.069700   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
	I0728 15:10:53.564180   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:53.564195   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:53.564204   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:53.564211   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:53.566995   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:53.567005   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:53.567010   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:53.567015   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:53.567019   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:53.567030   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:53.567035   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:53 GMT
	I0728 15:10:53.567041   20887 round_trippers.go:580]     Audit-Id: 182cf1a6-0466-49c9-881b-b3f5c7e8fc03
	I0728 15:10:53.567095   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:53.567367   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:53.567373   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:53.567379   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:53.567384   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:53.569437   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:53.569448   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:53.569455   20887 round_trippers.go:580]     Audit-Id: 1ea318d5-100c-41bc-825b-7d5fea38ac96
	I0728 15:10:53.569464   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:53.569471   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:53.569478   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:53.569509   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:53.569515   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:53 GMT
	I0728 15:10:53.569569   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:54.063677   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:54.063698   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:54.063709   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:54.063719   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:54.068113   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:54.068125   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:54.068130   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:54.068136   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:54 GMT
	I0728 15:10:54.068140   20887 round_trippers.go:580]     Audit-Id: 1a4914f4-45c7-4f15-a330-41eb588be503
	I0728 15:10:54.068145   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:54.068150   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:54.068154   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:54.068207   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:54.068489   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:54.068496   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:54.068501   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:54.068506   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:54.070725   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:54.070738   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:54.070743   20887 round_trippers.go:580]     Audit-Id: b15acedf-205b-4868-b22a-185cfedda8cb
	I0728 15:10:54.070748   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:54.070773   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:54.070799   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:54.070821   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:54.070830   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:54 GMT
	I0728 15:10:54.071185   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:54.564629   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:54.564642   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:54.564648   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:54.564653   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:54.567002   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:54.567012   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:54.567019   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:54 GMT
	I0728 15:10:54.567024   20887 round_trippers.go:580]     Audit-Id: 57164ac5-9d40-4df5-adb7-cd60fff62568
	I0728 15:10:54.567029   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:54.567033   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:54.567040   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:54.567051   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:54.567326   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:54.567607   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:54.567615   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:54.567624   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:54.567629   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:54.569904   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:54.569914   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:54.569920   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:54.569925   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:54.569930   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:54 GMT
	I0728 15:10:54.569934   20887 round_trippers.go:580]     Audit-Id: 244e7023-ed99-44fd-816a-6cfc58455073
	I0728 15:10:54.569939   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:54.569945   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:54.569988   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:55.064265   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:55.064290   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:55.064303   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:55.064312   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:55.069220   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:55.069230   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:55.069239   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:55.069244   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:55.069249   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:55 GMT
	I0728 15:10:55.069253   20887 round_trippers.go:580]     Audit-Id: a9e475b4-43a6-40cb-a7ea-65a0cbb15f52
	I0728 15:10:55.069258   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:55.069263   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:55.069316   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:55.069589   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:55.069595   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:55.069602   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:55.069607   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:55.071993   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:55.072003   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:55.072011   20887 round_trippers.go:580]     Audit-Id: 8954296d-0f98-4d41-bd10-71e4c1d38303
	I0728 15:10:55.072017   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:55.072021   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:55.072026   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:55.072030   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:55.072035   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:55 GMT
	I0728 15:10:55.072206   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:55.072398   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
	I0728 15:10:55.563601   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:55.563613   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:55.563620   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:55.563625   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:55.566313   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:55.566324   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:55.566329   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:55.566334   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:55.566338   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:55.566347   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:55.566352   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:55 GMT
	I0728 15:10:55.566356   20887 round_trippers.go:580]     Audit-Id: f00291f9-0e26-49bf-a235-bb706ec59210
	I0728 15:10:55.566469   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:55.566760   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:55.566766   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:55.566773   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:55.566778   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:55.568951   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:55.568962   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:55.568973   20887 round_trippers.go:580]     Audit-Id: e4706492-54b0-429b-8aeb-c3ca5504a6f8
	I0728 15:10:55.568984   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:55.568991   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:55.568997   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:55.569005   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:55.569011   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:55 GMT
	I0728 15:10:55.569063   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:56.063706   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:56.063727   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:56.063738   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:56.063749   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:56.067741   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:56.067763   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:56.067772   20887 round_trippers.go:580]     Audit-Id: 9cd602cb-19ec-420e-946c-b3ab0ecd4546
	I0728 15:10:56.067780   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:56.067789   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:56.067797   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:56.067803   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:56.067812   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:56 GMT
	I0728 15:10:56.068084   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:56.068365   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:56.068371   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:56.068379   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:56.068384   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:56.070398   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:56.070407   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:56.070412   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:56.070417   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:56.070422   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:56 GMT
	I0728 15:10:56.070426   20887 round_trippers.go:580]     Audit-Id: 59bc3c86-d0f7-4f43-ab1f-8be270319127
	I0728 15:10:56.070431   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:56.070436   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:56.070478   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:56.564318   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:56.564333   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:56.564342   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:56.564349   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:56.567635   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:56.567645   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:56.567650   20887 round_trippers.go:580]     Audit-Id: 1e6cc6ba-0a60-4ebc-845e-c11539317e3b
	I0728 15:10:56.567656   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:56.567661   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:56.567665   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:56.567670   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:56.567674   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:56 GMT
	I0728 15:10:56.567799   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:56.568065   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:56.568072   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:56.568078   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:56.568083   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:56.570006   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:56.570015   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:56.570021   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:56.570026   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:56.570033   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:56.570039   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:56 GMT
	I0728 15:10:56.570044   20887 round_trippers.go:580]     Audit-Id: 63e2158d-0c0c-4a2f-9059-aeb4a66c78dd
	I0728 15:10:56.570075   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:56.570331   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:57.063574   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:57.063590   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:57.063599   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:57.063606   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:57.067040   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:57.067056   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:57.067065   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:57.067070   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:57.067075   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:57 GMT
	I0728 15:10:57.067079   20887 round_trippers.go:580]     Audit-Id: 5e15ec86-3a98-4ac6-8fe6-c1ce9daa3078
	I0728 15:10:57.067086   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:57.067091   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:57.067147   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:57.067426   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:57.067432   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:57.067438   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:57.067443   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:57.069502   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:57.069515   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:57.069524   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:57.069535   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:57.069544   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:57.069549   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:57.069559   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:57 GMT
	I0728 15:10:57.069566   20887 round_trippers.go:580]     Audit-Id: f6a7388d-204b-463c-adfe-5bc71b031143
	I0728 15:10:57.069826   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:57.563802   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:57.563817   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:57.563826   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:57.563833   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:57.566788   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:57.566799   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:57.566805   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:57.566810   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:57 GMT
	I0728 15:10:57.566815   20887 round_trippers.go:580]     Audit-Id: 58a2162e-4e9e-4d0b-92ff-1c0efe90a984
	I0728 15:10:57.566819   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:57.566825   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:57.566830   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:57.566886   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:57.567161   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:57.567167   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:57.567173   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:57.567178   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:57.569129   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:57.569138   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:57.569144   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:57.569151   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:57.569161   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:57.569172   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:57 GMT
	I0728 15:10:57.569181   20887 round_trippers.go:580]     Audit-Id: 2556eec1-d60a-4386-a02b-a2d1c7d72944
	I0728 15:10:57.569198   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:57.569390   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:57.569588   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
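
The loop above is minikube waiting for the coredns pod to become Ready: roughly every 500 ms it re-fetches the pod and its node, and pod_ready.go logs the pod's Ready condition, which stays "False" for the whole window (the pod's resourceVersion is 703 on every poll, so its status never changes between requests). For reference, here is a minimal client-go sketch of an equivalent readiness poll; it is illustrative only, assumes a standard kubeconfig, and is not minikube's actual pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True,
// the same condition pod_ready.go is logging as "Ready":"False" above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a reachable cluster via the default kubeconfig (illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500 ms, matching the cadence of the log timestamps,
	// with an arbitrary 5-minute cap on the wait.
	err = wait.PollImmediate(500*time.Millisecond, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-6d4b75cb6d-dfxk7", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return isPodReady(pod), nil
	})
	if err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}

A plain GET poll like this trades efficiency for simplicity over a watch: each tick costs one pod GET (plus, in minikube's case, one node GET), which is exactly what fills this log until the condition flips or the wait times out.
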
	I0728 15:10:58.063656   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:58.063677   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:58.063692   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:58.063702   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:58.067007   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:58.067018   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:58.067026   20887 round_trippers.go:580]     Audit-Id: e90d48dc-b18b-42c3-bf96-c942f39fb014
	I0728 15:10:58.067032   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:58.067037   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:58.067041   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:58.067046   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:58.067051   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:58 GMT
	I0728 15:10:58.067324   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:58.067606   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:58.067612   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:58.067618   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:58.067623   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:58.069541   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:58.069549   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:58.069555   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:58 GMT
	I0728 15:10:58.069561   20887 round_trippers.go:580]     Audit-Id: 69b991a0-0e32-4e72-b32a-b4dc8808896d
	I0728 15:10:58.069567   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:58.069571   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:58.069576   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:58.069580   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:58.069621   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:58.564048   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:58.564066   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:58.564078   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:58.564088   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:58.568006   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:58.568019   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:58.568030   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:58.568038   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:58 GMT
	I0728 15:10:58.568044   20887 round_trippers.go:580]     Audit-Id: c726d51a-6c18-4ffb-846d-3189bb43ff03
	I0728 15:10:58.568052   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:58.568058   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:58.568064   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:58.568131   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:58.568502   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:58.568511   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:58.568519   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:58.568526   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:58.570526   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:10:58.570535   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:58.570541   20887 round_trippers.go:580]     Audit-Id: 8f2fd8b4-67fd-43e2-98ac-0bd66c8c473d
	I0728 15:10:58.570546   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:58.570551   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:58.570555   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:58.570563   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:58.570568   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:58 GMT
	I0728 15:10:58.570608   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:59.064586   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:59.064608   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:59.064620   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:59.064630   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:59.068812   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:59.068824   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:59.068851   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:59 GMT
	I0728 15:10:59.068856   20887 round_trippers.go:580]     Audit-Id: 54a273ad-00fa-43bd-b652-d36c772033a6
	I0728 15:10:59.068874   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:59.068882   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:59.068887   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:59.068893   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:59.068992   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:59.069308   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:59.069316   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:59.069322   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:59.069327   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:59.071878   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:10:59.071887   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:59.071892   20887 round_trippers.go:580]     Audit-Id: 48457703-3a07-4508-bac0-6f42f81cf1c9
	I0728 15:10:59.071899   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:59.071904   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:59.071908   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:59.071913   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:59.071918   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:59 GMT
	I0728 15:10:59.071956   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:59.563500   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:10:59.563525   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:59.563538   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:59.563548   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:59.567358   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:10:59.567371   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:59.567377   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:59.567382   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:59.567391   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:59.567396   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:59.567401   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:59 GMT
	I0728 15:10:59.567405   20887 round_trippers.go:580]     Audit-Id: d671048b-df86-467d-a9f5-ab4b2b366f5b
	I0728 15:10:59.567460   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:10:59.567774   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:10:59.567781   20887 round_trippers.go:469] Request Headers:
	I0728 15:10:59.567787   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:10:59.567791   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:10:59.571832   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:10:59.571843   20887 round_trippers.go:577] Response Headers:
	I0728 15:10:59.571855   20887 round_trippers.go:580]     Audit-Id: be0ca3d0-a09c-4de2-adda-63994a4845fe
	I0728 15:10:59.571864   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:10:59.571871   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:10:59.571879   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:10:59.571886   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:10:59.571893   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:10:59 GMT
	I0728 15:10:59.572213   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:10:59.572398   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
	I0728 15:11:00.063722   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:00.063742   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:00.063755   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:00.063764   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:00.068021   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:00.068033   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:00.068044   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:00.068050   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:00.068056   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:00 GMT
	I0728 15:11:00.068063   20887 round_trippers.go:580]     Audit-Id: 61cc8825-38ee-4eea-8d17-a091365286c2
	I0728 15:11:00.068070   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:00.068075   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:00.068143   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:00.068411   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:00.068417   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:00.068423   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:00.068428   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:00.070157   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:00.070168   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:00.070174   20887 round_trippers.go:580]     Audit-Id: 3c64c012-16ac-4cb0-80bd-eda6ee2a0c91
	I0728 15:11:00.070180   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:00.070187   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:00.070194   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:00.070199   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:00.070203   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:00 GMT
	I0728 15:11:00.070261   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:00.564005   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:00.564019   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:00.564029   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:00.564036   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:00.567328   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:00.567339   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:00.567344   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:00 GMT
	I0728 15:11:00.567349   20887 round_trippers.go:580]     Audit-Id: bef46ae1-ebb1-48e2-80df-87832a8dd3df
	I0728 15:11:00.567353   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:00.567361   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:00.567366   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:00.567370   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:00.567481   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:00.567747   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:00.567753   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:00.567759   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:00.567764   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:00.569567   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:00.569581   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:00.569588   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:00.569595   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:00.569601   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:00 GMT
	I0728 15:11:00.569605   20887 round_trippers.go:580]     Audit-Id: 462e3cc5-9e8c-4d9a-891a-a5af21ed80c5
	I0728 15:11:00.569609   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:00.569619   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:00.569668   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:01.063829   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:01.063856   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:01.063869   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:01.063880   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:01.067950   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:01.067966   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:01.067973   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:01.067980   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:01.067987   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:01.067993   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:01.068004   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:01 GMT
	I0728 15:11:01.068011   20887 round_trippers.go:580]     Audit-Id: 3c30da17-24e9-44f8-9e77-98f6a9bc17ab
	I0728 15:11:01.068096   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:01.068372   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:01.068380   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:01.068386   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:01.068393   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:01.070302   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:01.070311   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:01.070316   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:01.070321   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:01.070326   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:01.070330   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:01.070335   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:01 GMT
	I0728 15:11:01.070340   20887 round_trippers.go:580]     Audit-Id: b998cadf-5c8f-4db9-bfb2-70f39938ccdc
	I0728 15:11:01.070388   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:01.563562   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:01.563577   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:01.563586   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:01.563593   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:01.566687   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:01.566700   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:01.566706   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:01.566711   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:01.566716   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:01.566722   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:01 GMT
	I0728 15:11:01.566726   20887 round_trippers.go:580]     Audit-Id: 7d932a15-7127-41a7-b3fc-9f93359013eb
	I0728 15:11:01.566731   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:01.566899   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:01.567191   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:01.567197   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:01.567203   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:01.567208   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:01.569126   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:01.569136   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:01.569142   20887 round_trippers.go:580]     Audit-Id: 5ccc4432-5074-4a6b-b64f-62fc964c2a72
	I0728 15:11:01.569151   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:01.569157   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:01.569163   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:01.569168   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:01.569173   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:01 GMT
	I0728 15:11:01.569224   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:02.065549   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:02.065571   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:02.065583   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:02.065593   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:02.069164   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:02.069174   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:02.069180   20887 round_trippers.go:580]     Audit-Id: 0a2dcc2c-53c6-47f6-9847-31d585b9b6b8
	I0728 15:11:02.069184   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:02.069189   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:02.069194   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:02.069199   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:02.069203   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:02 GMT
	I0728 15:11:02.069253   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:02.069542   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:02.069548   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:02.069554   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:02.069559   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:02.071618   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:02.071627   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:02.071633   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:02.071637   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:02 GMT
	I0728 15:11:02.071642   20887 round_trippers.go:580]     Audit-Id: 26ad4969-b63d-4db6-84d0-7646fb99ee51
	I0728 15:11:02.071647   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:02.071652   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:02.071656   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:02.071696   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:02.071897   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
	I0728 15:11:02.564759   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:02.564775   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:02.564783   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:02.564794   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:02.567870   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:02.567883   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:02.567891   20887 round_trippers.go:580]     Audit-Id: ba5c0f67-b6b7-44f2-abd6-5d2f9d32543c
	I0728 15:11:02.567897   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:02.567904   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:02.567908   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:02.567913   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:02.567918   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:02 GMT
	I0728 15:11:02.568114   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:02.568399   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:02.568406   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:02.568412   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:02.568418   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:02.570398   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:02.570407   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:02.570412   20887 round_trippers.go:580]     Audit-Id: 040c8970-aea2-42e1-a245-ed3bd8addaca
	I0728 15:11:02.570419   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:02.570428   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:02.570443   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:02.570452   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:02.570457   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:02 GMT
	I0728 15:11:02.570658   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:03.064192   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:03.064213   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:03.064225   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:03.064234   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:03.068874   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:03.068888   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:03.068894   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:03.068900   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:03 GMT
	I0728 15:11:03.068904   20887 round_trippers.go:580]     Audit-Id: f9ea7d9d-892a-438f-87a8-28c0f2936263
	I0728 15:11:03.068909   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:03.068914   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:03.068919   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:03.068972   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:03.069259   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:03.069266   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:03.069272   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:03.069277   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:03.071236   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:03.071245   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:03.071253   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:03.071259   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:03.071271   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:03.071276   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:03 GMT
	I0728 15:11:03.071281   20887 round_trippers.go:580]     Audit-Id: 9fa802be-2768-4cac-a516-c7ef1678838c
	I0728 15:11:03.071285   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:03.071326   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:03.563632   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:03.563653   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:03.563665   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:03.563675   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:03.567428   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:03.567440   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:03.567445   20887 round_trippers.go:580]     Audit-Id: e09ce4d5-51dd-4288-9ee0-745a371b807e
	I0728 15:11:03.567450   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:03.567455   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:03.567459   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:03.567464   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:03.567468   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:03 GMT
	I0728 15:11:03.567525   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:03.567796   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:03.567803   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:03.567808   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:03.567813   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:03.569689   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:03.569699   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:03.569705   20887 round_trippers.go:580]     Audit-Id: 67bb063a-3b65-4f8f-b88f-f32ce3ef38fe
	I0728 15:11:03.569710   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:03.569715   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:03.569719   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:03.569746   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:03.569767   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:03 GMT
	I0728 15:11:03.570056   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:04.065237   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:04.065262   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:04.065274   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:04.065329   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:04.069390   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:04.069405   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:04.069413   20887 round_trippers.go:580]     Audit-Id: a52bf9d7-c41b-4053-acfe-fc95b81c040f
	I0728 15:11:04.069420   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:04.069427   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:04.069433   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:04.069440   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:04.069446   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:04 GMT
	I0728 15:11:04.069531   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:04.069867   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:04.069873   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:04.069879   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:04.069884   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:04.072055   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:04.072064   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:04.072070   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:04.072085   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:04.072093   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:04.072098   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:04 GMT
	I0728 15:11:04.072103   20887 round_trippers.go:580]     Audit-Id: ae8e301e-7248-44d0-a7d3-800f1001d802
	I0728 15:11:04.072109   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:04.072159   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:04.072348   20887 pod_ready.go:102] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"False"
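
The cycle repeating above comes from minikube's pod readiness wait (pod_ready.go): roughly every 500 ms it re-fetches the coredns pod (and its node) and logs the pod's Ready condition until it turns True or the wait times out. A minimal client-go sketch of that polling pattern is below; waitPodReady, the 5-minute timeout, and the kubeconfig handling are illustrative assumptions, not minikube's actual implementation.

// Sketch of the readiness poll visible in this log: fetch the pod every
// ~500ms and check its Ready condition until it is True or we time out.
// Function name and timeout are illustrative, not minikube's own code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports Ready=True or the timeout expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				// Mirrors the pod_ready.go:102 log line seen above.
				fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // no Ready condition reported yet; keep polling
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "coredns-6d4b75cb6d-dfxk7", 5*time.Minute); err != nil {
		fmt.Println("wait failed:", err)
	}
}
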
	I0728 15:11:04.563509   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:04.563536   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:04.563587   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:04.563599   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:04.567314   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:04.567326   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:04.567332   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:04.567336   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:04 GMT
	I0728 15:11:04.567340   20887 round_trippers.go:580]     Audit-Id: 3647f750-0959-4be7-8fae-e57918b64a2a
	I0728 15:11:04.567345   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:04.567350   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:04.567354   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:04.567407   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:04.567687   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:04.567693   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:04.567699   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:04.567704   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:04.569743   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:04.569753   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:04.569758   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:04 GMT
	I0728 15:11:04.569763   20887 round_trippers.go:580]     Audit-Id: 995f55b3-2054-4486-8c9d-aa2e935ef09a
	I0728 15:11:04.569767   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:04.569772   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:04.569777   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:04.569781   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:04.569898   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:05.065003   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:05.065028   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:05.065039   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:05.065049   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:05.069735   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:05.069748   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:05.069755   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:05.069759   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:05.069764   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:05 GMT
	I0728 15:11:05.069769   20887 round_trippers.go:580]     Audit-Id: 7a444741-4d37-4095-bc47-b861813b8cd1
	I0728 15:11:05.069773   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:05.069778   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:05.069834   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:05.070116   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:05.070124   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:05.070130   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:05.070135   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:05.072231   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:05.072250   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:05.072276   20887 round_trippers.go:580]     Audit-Id: 15e863d8-854e-4c8b-969b-2a1156135b26
	I0728 15:11:05.072284   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:05.072289   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:05.072299   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:05.072305   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:05.072309   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:05 GMT
	I0728 15:11:05.072440   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:05.563448   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:05.563461   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:05.563468   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:05.563473   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:05.566374   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:05.566386   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:05.566392   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:05 GMT
	I0728 15:11:05.566397   20887 round_trippers.go:580]     Audit-Id: 202148f5-099b-4e75-be1d-c99656e5be90
	I0728 15:11:05.566402   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:05.566406   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:05.566412   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:05.566417   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:05.566489   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:05.566768   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:05.566774   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:05.566780   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:05.566785   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:05.568956   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:05.568967   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:05.568974   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:05.568981   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:05.568992   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:05 GMT
	I0728 15:11:05.569007   20887 round_trippers.go:580]     Audit-Id: 79e8b229-9b93-4109-9c97-ea2827cb22e8
	I0728 15:11:05.569024   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:05.569030   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:05.569079   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.063520   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:06.063542   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.063555   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.063566   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.067292   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:06.067303   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.067311   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.067318   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.067325   20887 round_trippers.go:580]     Audit-Id: 3c84ee11-5706-4724-8d6b-603095cb35d2
	I0728 15:11:06.067331   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.067352   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.067360   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.067590   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"703","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6418 chars]
	I0728 15:11:06.067870   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.067877   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.067883   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.067888   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.069924   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:06.069933   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.069938   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.069943   20887 round_trippers.go:580]     Audit-Id: 224153b4-5234-475d-98d6-dd85b9183abb
	I0728 15:11:06.069950   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.069954   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.069959   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.069963   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.070016   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.563645   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-dfxk7
	I0728 15:11:06.563666   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.563678   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.563689   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.567856   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:06.567872   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.567880   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.567886   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.567895   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.567904   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.567910   20887 round_trippers.go:580]     Audit-Id: 946626db-6b5a-4298-aff0-1a358e9eefd6
	I0728 15:11:06.567917   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.567993   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"797","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:
livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:p [truncated 6189 chars]
	I0728 15:11:06.568310   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.568316   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.568321   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.568327   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.570213   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.570222   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.570228   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.570235   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.570246   20887 round_trippers.go:580]     Audit-Id: 2ff213e9-ff02-4c3b-90ba-15b56d031bcf
	I0728 15:11:06.570258   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.570271   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.570280   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.570510   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.570691   20887 pod_ready.go:92] pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:06.570701   20887 pod_ready.go:81] duration metric: took 20.902231055s waiting for pod "coredns-6d4b75cb6d-dfxk7" in "kube-system" namespace to be "Ready" ...
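	(The loop above shows the shape of this pod_ready wait: GET the pod, inspect its Ready condition, GET the node it runs on, sleep roughly half a second, and repeat until the condition reports True or the 6m0s budget runs out. A minimal client-go sketch of that readiness check follows; it is a simplified stand-in, not minikube's actual pod_ready.go, and podIsReady is a hypothetical helper.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms, matching the ~half-second cadence of the GETs
        // above, with the same 6-minute budget the log mentions.
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-6d4b75cb6d-dfxk7", metav1.GetOptions{})
            if err == nil && podIsReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }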
	I0728 15:11:06.570707   20887 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.570733   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/etcd-multinode-20220728150610-12923
	I0728 15:11:06.570737   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.570743   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.570748   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.572702   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.572710   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.572715   20887 round_trippers.go:580]     Audit-Id: 5837a8d2-e495-4e85-9654-85a42ae50c5d
	I0728 15:11:06.572724   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.572731   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.572737   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.572742   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.572746   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.573005   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-20220728150610-12923","namespace":"kube-system","uid":"2d4683e7-2e93-41c5-af51-5181a7c29edd","resourceVersion":"730","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"86483540902094820b1a0cc29f31f22f","kubernetes.io/config.mirror":"86483540902094820b1a0cc29f31f22f","kubernetes.io/config.seen":"2022-07-28T22:06:37.255020292Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fi
eldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io [truncated 6050 chars]
	I0728 15:11:06.573258   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.573265   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.573273   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.573280   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.575044   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.575052   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.575058   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.575063   20887 round_trippers.go:580]     Audit-Id: 42358bb8-3946-447e-a113-b72b4d2be218
	I0728 15:11:06.575067   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.575072   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.575076   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.575081   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.575127   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.575336   20887 pod_ready.go:92] pod "etcd-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:06.575343   20887 pod_ready.go:81] duration metric: took 4.630782ms waiting for pod "etcd-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.575356   20887 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.575384   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-20220728150610-12923
	I0728 15:11:06.575388   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.575394   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.575399   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.577088   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.577096   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.577102   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.577106   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.577111   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.577116   20887 round_trippers.go:580]     Audit-Id: b3ba3005-c591-4241-bf4b-47ec3a215d2d
	I0728 15:11:06.577121   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.577125   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.577190   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-20220728150610-12923","namespace":"kube-system","uid":"34425f5f-5cbc-4e7c-89b3-e4758c44f162","resourceVersion":"727","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"f17cd4a02884221436b424aa5c9008ee","kubernetes.io/config.mirror":"f17cd4a02884221436b424aa5c9008ee","kubernetes.io/config.seen":"2022-07-28T22:06:37.255021189Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z",
"fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".": [truncated 8517 chars]
	I0728 15:11:06.577440   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.577446   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.577452   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.577457   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.578986   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.578997   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.579003   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.579009   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.579013   20887 round_trippers.go:580]     Audit-Id: 3307727b-5d95-4468-b7d8-8329583149bb
	I0728 15:11:06.579018   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.579022   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.579049   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.579271   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.579439   20887 pod_ready.go:92] pod "kube-apiserver-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:06.579444   20887 pod_ready.go:81] duration metric: took 4.082874ms waiting for pod "kube-apiserver-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.579450   20887 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.579472   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-20220728150610-12923
	I0728 15:11:06.579476   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.579481   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.579486   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.581318   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.581327   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.581332   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.581343   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.581349   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.581354   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.581359   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.581364   20887 round_trippers.go:580]     Audit-Id: 0ca7ffc2-ef2c-4cc7-b3ce-2f71c934b530
	I0728 15:11:06.581672   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-20220728150610-12923","namespace":"kube-system","uid":"92841ab9-f773-435e-a133-794e0d8e0cef","resourceVersion":"778","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.mirror":"f06b24eea4440e525cff49c1e9794974","kubernetes.io/config.seen":"2022-07-28T22:06:37.255007827Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"
.":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror": [truncated 8090 chars]
	I0728 15:11:06.581935   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.581942   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.581948   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.581953   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.583745   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.583754   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.583759   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.583764   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.583769   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.583774   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.583779   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.583790   20887 round_trippers.go:580]     Audit-Id: 09adf87b-2d67-4e01-a725-d038f8d9ee1d
	I0728 15:11:06.583836   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.584018   20887 pod_ready.go:92] pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:06.584024   20887 pod_ready.go:81] duration metric: took 4.570056ms waiting for pod "kube-controller-manager-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.584029   20887 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bxdk6" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.584049   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-bxdk6
	I0728 15:11:06.584053   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.584059   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.584064   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.585706   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.585714   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.585719   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.585724   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.585729   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.585733   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.585737   20887 round_trippers.go:580]     Audit-Id: 31cfff38-39b4-41b8-8384-f4329b95e87f
	I0728 15:11:06.585742   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.585786   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bxdk6","generateName":"kube-proxy-","namespace":"kube-system","uid":"befca8fa-aef6-415a-b033-8522067db320","resourceVersion":"474","creationTimestamp":"2022-07-28T22:07:45Z","labels":{"controller-revision-hash":"94985b49","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:07:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5548 chars]
	I0728 15:11:06.586022   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923-m02
	I0728 15:11:06.586028   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.586034   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.586039   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.587472   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:06.587481   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.587486   20887 round_trippers.go:580]     Audit-Id: 19724d84-538d-4c14-aeb8-b2098d890ee9
	I0728 15:11:06.587490   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.587494   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.587499   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.587503   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.587508   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.587661   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923-m02","uid":"2d4cf78c-ed12-4ef8-9967-e85ae2ffd232","resourceVersion":"556","creationTimestamp":"2022-07-28T22:07:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:07:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:07:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 4523 chars]
	I0728 15:11:06.587815   20887 pod_ready.go:92] pod "kube-proxy-bxdk6" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:06.587821   20887 pod_ready.go:81] duration metric: took 3.78832ms waiting for pod "kube-proxy-bxdk6" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.587826   20887 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cdz7z" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:06.765733   20887 request.go:533] Waited for 177.863354ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-cdz7z
	I0728 15:11:06.765828   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-cdz7z
	I0728 15:11:06.765835   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.765935   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.765947   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.770071   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:06.770089   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.770100   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.770112   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.770125   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.770142   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.770153   20887 round_trippers.go:580]     Audit-Id: fc0edabf-73f5-416f-9d1e-c8a53efe45d1
	I0728 15:11:06.770162   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.770252   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cdz7z","generateName":"kube-proxy-","namespace":"kube-system","uid":"f9727653-ed51-43f3-95ad-fd2f5fb0ac6e","resourceVersion":"704","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"controller-revision-hash":"94985b49","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5747 chars]
	I0728 15:11:06.963742   20887 request.go:533] Waited for 193.14489ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.963807   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:06.963816   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:06.963827   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:06.963840   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:06.967895   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:06.967910   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:06.967917   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:06.967944   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:06.967954   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:06.967961   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:06 GMT
	I0728 15:11:06.967967   20887 round_trippers.go:580]     Audit-Id: 5a0598f5-f32a-42b3-b08f-2a44ad156d4f
	I0728 15:11:06.967973   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:06.968037   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"ma
nager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:06.968331   20887 pod_ready.go:92] pod "kube-proxy-cdz7z" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:06.968338   20887 pod_ready.go:81] duration metric: took 380.511561ms waiting for pod "kube-proxy-cdz7z" in "kube-system" namespace to be "Ready" ...
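	(The "Waited for ... due to client-side throttling, not priority and fairness" lines in this wait loop come from client-go's default client-side rate limiter, QPS 5 with a burst of 10, not from the API server: once the burst is spent, each request queues until a token frees up. A short sketch of how a client could raise those limits before building a clientset; QPS and Burst are real rest.Config fields, and the values here are only illustrative.)

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Defaults are QPS=5, Burst=10; raising them trades extra API-server
        // load for fewer client-side waits like the ~180-200ms ones logged here.
        config.QPS = 50
        config.Burst = 100
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        _ = client // use the clientset as usual; requests are now limited at 50 rps
    }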
	I0728 15:11:06.968344   20887 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cn9x2" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:07.163824   20887 request.go:533] Waited for 195.350893ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-cn9x2
	I0728 15:11:07.163881   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-proxy-cn9x2
	I0728 15:11:07.163889   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:07.163901   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:07.163912   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:07.168887   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:07.168898   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:07.168904   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:07.168914   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:07.168920   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:07 GMT
	I0728 15:11:07.168924   20887 round_trippers.go:580]     Audit-Id: 9c91f87b-70d9-45f6-8683-984c661379d0
	I0728 15:11:07.168929   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:07.168933   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:07.170114   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cn9x2","generateName":"kube-proxy-","namespace":"kube-system","uid":"813dc8a0-2ea3-4ee9-83ce-fe09ccf38295","resourceVersion":"671","creationTimestamp":"2022-07-28T22:08:39Z","labels":{"controller-revision-hash":"94985b49","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:08:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6c17ae9-2dfb-4593-a5d5-4ecf9e4caa69\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5755 chars]
	I0728 15:11:07.364100   20887 request.go:533] Waited for 193.661665ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923-m03
	I0728 15:11:07.364152   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923-m03
	I0728 15:11:07.364160   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:07.364257   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:07.364271   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:07.368521   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:07.368536   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:07.368544   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:07 GMT
	I0728 15:11:07.368550   20887 round_trippers.go:580]     Audit-Id: 76694b8b-8a83-4975-82a5-3519e8d5a51f
	I0728 15:11:07.368561   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:07.368584   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:07.368595   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:07.368601   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:07.368867   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923-m03","uid":"705fe4c5-d194-48b6-83d4-926ad5fead86","resourceVersion":"686","creationTimestamp":"2022-07-28T22:09:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:09:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:09:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 4340 chars]
	I0728 15:11:07.369120   20887 pod_ready.go:92] pod "kube-proxy-cn9x2" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:07.369126   20887 pod_ready.go:81] duration metric: took 400.782058ms waiting for pod "kube-proxy-cn9x2" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:07.369132   20887 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:07.565324   20887 request.go:533] Waited for 196.139418ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220728150610-12923
	I0728 15:11:07.565387   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-20220728150610-12923
	I0728 15:11:07.565401   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:07.565415   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:07.565426   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:07.569519   20887 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0728 15:11:07.569533   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:07.569540   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:07.569547   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:07.569554   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:07 GMT
	I0728 15:11:07.569560   20887 round_trippers.go:580]     Audit-Id: ed5b44d1-23a1-464f-b4e2-89d12aa4333d
	I0728 15:11:07.569567   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:07.569578   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:07.569646   20887 request.go:1073] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-20220728150610-12923","namespace":"kube-system","uid":"ef5d84ce-4249-4af0-b1be-7a3d7f8c2205","resourceVersion":"742","creationTimestamp":"2022-07-28T22:06:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"164dd1e1cbdc7905e66f2af11f156d06","kubernetes.io/config.mirror":"164dd1e1cbdc7905e66f2af11f156d06","kubernetes.io/config.seen":"2022-07-28T22:06:37.255019449Z","kubernetes.io/config.source":"file","seccomp.security.alpha.kubernetes.io/pod":"runtime/default"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i [truncated 4974 chars]
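Note: the recurring "Waited for ... due to client-side throttling" lines here come from client-go's built-in rate limiter, not from server-side API Priority and Fairness, as the message itself states. A minimal sketch, assuming a standard client-go setup, of where those limits live and how a client could raise them (client-go's defaults of QPS=5 and Burst=10 are what force the ~200ms waits when many pods are polled back to back):

    package client

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newClient is a sketch, not minikube code: it builds a clientset with
    // rate limits raised above client-go's defaults (QPS=5, Burst=10), which
    // produce the "client-side throttling" waits logged in this section.
    func newClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }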
	I0728 15:11:07.765682   20887 request.go:533] Waited for 195.39648ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:07.765713   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923
	I0728 15:11:07.765717   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:07.765723   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:07.765728   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:07.768657   20887 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0728 15:11:07.768668   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:07.768674   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:07 GMT
	I0728 15:11:07.768678   20887 round_trippers.go:580]     Audit-Id: b9f785af-8ceb-49b1-96e3-e5b38fb92ac1
	I0728 15:11:07.768684   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:07.768688   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:07.768693   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:07.768698   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:07.768750   20887 request.go:1073] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-07 [truncated 5308 chars]
	I0728 15:11:07.768943   20887 pod_ready.go:92] pod "kube-scheduler-multinode-20220728150610-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:11:07.768949   20887 pod_ready.go:81] duration metric: took 399.816045ms waiting for pod "kube-scheduler-multinode-20220728150610-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:11:07.768956   20887 pod_ready.go:38] duration metric: took 22.266341127s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
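Note: the pod_ready checks above key off each pod's status conditions rather than its phase. A minimal approximation of the idea in Go with client-go types (a sketch, not minikube's actual pod_ready.go):

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the pod's PodReady condition is True, which
    // is the `"Ready":"True"` state logged above. A Running phase alone is
    // not enough; see the storage-provisioner entry later in this log, which
    // is Running but not Ready.
    func isPodReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }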
	I0728 15:11:07.768970   20887 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:11:07.769018   20887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:11:07.777756   20887 command_runner.go:130] > 1658
	I0728 15:11:07.778465   20887 api_server.go:71] duration metric: took 22.485150166s to wait for apiserver process to appear ...
	I0728 15:11:07.778478   20887 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:11:07.778485   20887 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56607/healthz ...
	I0728 15:11:07.783449   20887 api_server.go:266] https://127.0.0.1:56607/healthz returned 200:
	ok
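Note: the healthz probe above is a plain HTTPS GET whose success body is literally "ok". A sketch of an equivalent standalone check; TLS verification is skipped here only because the test apiserver is reached through a localhost port with a self-signed certificate, and a real client should trust the cluster CA instead:

    package health

    import (
        "crypto/tls"
        "net/http"
        "time"
    )

    // apiServerHealthy performs the same kind of GET /healthz shown above and
    // treats any HTTP 200 as healthy.
    func apiServerHealthy(baseURL string) bool {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(baseURL + "/healthz")
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK
    }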
	I0728 15:11:07.783478   20887 round_trippers.go:463] GET https://127.0.0.1:56607/version
	I0728 15:11:07.783482   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:07.783489   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:07.783495   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:07.784562   20887 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0728 15:11:07.784572   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:07.784577   20887 round_trippers.go:580]     Audit-Id: 429bd255-1804-4dd6-bd12-f35136aeb1c7
	I0728 15:11:07.784582   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:07.784587   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:07.784592   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:07.784596   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:07.784600   20887 round_trippers.go:580]     Content-Length: 263
	I0728 15:11:07.784607   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:07 GMT
	I0728 15:11:07.784656   20887 request.go:1073] Response Body: {
	  "major": "1",
	  "minor": "24",
	  "gitVersion": "v1.24.3",
	  "gitCommit": "aef86a93758dc3cb2c658dd9657ab4ad4afc21cb",
	  "gitTreeState": "clean",
	  "buildDate": "2022-07-13T14:23:26Z",
	  "goVersion": "go1.18.3",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0728 15:11:07.784682   20887 api_server.go:140] control plane version: v1.24.3
	I0728 15:11:07.784688   20887 api_server.go:130] duration metric: took 6.205662ms to wait for apiserver health ...
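Note: the /version payload printed above deserializes into k8s.io/apimachinery's version.Info type; only GitVersion is needed to derive the "control plane version" line. A sketch:

    package version

    import (
        "encoding/json"

        apiversion "k8s.io/apimachinery/pkg/version"
    )

    // controlPlaneVersion decodes a /version response body such as the one
    // above and returns its GitVersion (e.g. "v1.24.3").
    func controlPlaneVersion(body []byte) (string, error) {
        var info apiversion.Info
        if err := json.Unmarshal(body, &info); err != nil {
            return "", err
        }
        return info.GitVersion, nil
    }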
	I0728 15:11:07.784692   20887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:11:07.963798   20887 request.go:533] Waited for 179.065632ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:11:07.963873   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:11:07.963881   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:07.963892   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:07.963904   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:07.969192   20887 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0728 15:11:07.969203   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:07.969211   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:07.969218   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:07.969223   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:07.969228   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:07.969234   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:07 GMT
	I0728 15:11:07.969242   20887 round_trippers.go:580]     Audit-Id: 24815efd-3433-4082-8bfe-d6b5780c1657
	I0728 15:11:07.970066   20887 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"801"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"797","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".": [truncated 84989 chars]
	I0728 15:11:07.971935   20887 system_pods.go:59] 12 kube-system pods found
	I0728 15:11:07.971945   20887 system_pods.go:61] "coredns-6d4b75cb6d-dfxk7" [ea8a6018-c281-45ec-bbb7-19f2988aa884] Running
	I0728 15:11:07.971950   20887 system_pods.go:61] "etcd-multinode-20220728150610-12923" [2d4683e7-2e93-41c5-af51-5181a7c29edd] Running
	I0728 15:11:07.971953   20887 system_pods.go:61] "kindnet-52mvf" [ef5b2400-09e0-4d0c-98b9-d520fd42e827] Running
	I0728 15:11:07.971958   20887 system_pods.go:61] "kindnet-tlp2m" [b535556f-fbbe-4220-9037-5016f1b8fb51] Running
	I0728 15:11:07.971961   20887 system_pods.go:61] "kindnet-v5hq8" [3410f1c1-9947-4a08-8503-660caf65dc5c] Running
	I0728 15:11:07.971965   20887 system_pods.go:61] "kube-apiserver-multinode-20220728150610-12923" [34425f5f-5cbc-4e7c-89b3-e4758c44f162] Running
	I0728 15:11:07.971969   20887 system_pods.go:61] "kube-controller-manager-multinode-20220728150610-12923" [92841ab9-f773-435e-a133-794e0d8e0cef] Running
	I0728 15:11:07.971973   20887 system_pods.go:61] "kube-proxy-bxdk6" [befca8fa-aef6-415a-b033-8522067db320] Running
	I0728 15:11:07.971977   20887 system_pods.go:61] "kube-proxy-cdz7z" [f9727653-ed51-43f3-95ad-fd2f5fb0ac6e] Running
	I0728 15:11:07.971981   20887 system_pods.go:61] "kube-proxy-cn9x2" [813dc8a0-2ea3-4ee9-83ce-fe09ccf38295] Running
	I0728 15:11:07.971985   20887 system_pods.go:61] "kube-scheduler-multinode-20220728150610-12923" [ef5d84ce-4249-4af0-b1be-7a3d7f8c2205] Running
	I0728 15:11:07.971990   20887 system_pods.go:61] "storage-provisioner" [29238934-2c0b-4262-80ff-12975d44a715] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 15:11:07.971993   20887 system_pods.go:74] duration metric: took 187.299378ms to wait for pod list to return data ...
	I0728 15:11:07.971997   20887 default_sa.go:34] waiting for default service account to be created ...
	I0728 15:11:08.164659   20887 request.go:533] Waited for 192.538868ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/default/serviceaccounts
	I0728 15:11:08.164703   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/default/serviceaccounts
	I0728 15:11:08.164711   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:08.164722   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:08.164733   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:08.168638   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:08.168656   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:08.168664   20887 round_trippers.go:580]     Audit-Id: 56225654-3f8d-4ee0-a172-4263a275cd06
	I0728 15:11:08.168671   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:08.168679   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:08.168686   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:08.168693   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:08.168698   20887 round_trippers.go:580]     Content-Length: 261
	I0728 15:11:08.168726   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:08 GMT
	I0728 15:11:08.168747   20887 request.go:1073] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"801"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"d9af6ce9-3a6c-49bf-9e9c-59cda36b759c","resourceVersion":"306","creationTimestamp":"2022-07-28T22:06:49Z"}}]}
	I0728 15:11:08.168903   20887 default_sa.go:45] found service account: "default"
	I0728 15:11:08.168912   20887 default_sa.go:55] duration metric: took 196.912334ms for default service account to be created ...
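Note: waiting for the "default" service account matters because pod creation in a namespace is typically rejected by the ServiceAccount admission plugin until that account has been provisioned. A client-go sketch of the same check performed above:

    package sawait

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultSAExists lists service accounts in the "default" namespace and
    // looks for the one named "default", mirroring the request above.
    func defaultSAExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, sa := range sas.Items {
            if sa.Name == "default" {
                return true, nil
            }
        }
        return false, nil
    }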
	I0728 15:11:08.168918   20887 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 15:11:08.365709   20887 request.go:533] Waited for 196.734109ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:11:08.365797   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/namespaces/kube-system/pods
	I0728 15:11:08.365805   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:08.365845   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:08.365867   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:08.372156   20887 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0728 15:11:08.372176   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:08.372206   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:08 GMT
	I0728 15:11:08.372217   20887 round_trippers.go:580]     Audit-Id: 339079bb-6191-48db-8e1f-28f54811a523
	I0728 15:11:08.372235   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:08.372252   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:08.372268   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:08.372282   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:08.373463   20887 request.go:1073] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"801"},"items":[{"metadata":{"name":"coredns-6d4b75cb6d-dfxk7","generateName":"coredns-6d4b75cb6d-","namespace":"kube-system","uid":"ea8a6018-c281-45ec-bbb7-19f2988aa884","resourceVersion":"797","creationTimestamp":"2022-07-28T22:06:50Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"6d4b75cb6d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-6d4b75cb6d","uid":"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-07-28T22:06:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5e6a6a42-c37a-4fe1-ab1f-1c42152a5524\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".": [truncated 84989 chars]
	I0728 15:11:08.375305   20887 system_pods.go:86] 12 kube-system pods found
	I0728 15:11:08.375316   20887 system_pods.go:89] "coredns-6d4b75cb6d-dfxk7" [ea8a6018-c281-45ec-bbb7-19f2988aa884] Running
	I0728 15:11:08.375320   20887 system_pods.go:89] "etcd-multinode-20220728150610-12923" [2d4683e7-2e93-41c5-af51-5181a7c29edd] Running
	I0728 15:11:08.375325   20887 system_pods.go:89] "kindnet-52mvf" [ef5b2400-09e0-4d0c-98b9-d520fd42e827] Running
	I0728 15:11:08.375329   20887 system_pods.go:89] "kindnet-tlp2m" [b535556f-fbbe-4220-9037-5016f1b8fb51] Running
	I0728 15:11:08.375332   20887 system_pods.go:89] "kindnet-v5hq8" [3410f1c1-9947-4a08-8503-660caf65dc5c] Running
	I0728 15:11:08.375336   20887 system_pods.go:89] "kube-apiserver-multinode-20220728150610-12923" [34425f5f-5cbc-4e7c-89b3-e4758c44f162] Running
	I0728 15:11:08.375340   20887 system_pods.go:89] "kube-controller-manager-multinode-20220728150610-12923" [92841ab9-f773-435e-a133-794e0d8e0cef] Running
	I0728 15:11:08.375344   20887 system_pods.go:89] "kube-proxy-bxdk6" [befca8fa-aef6-415a-b033-8522067db320] Running
	I0728 15:11:08.375347   20887 system_pods.go:89] "kube-proxy-cdz7z" [f9727653-ed51-43f3-95ad-fd2f5fb0ac6e] Running
	I0728 15:11:08.375350   20887 system_pods.go:89] "kube-proxy-cn9x2" [813dc8a0-2ea3-4ee9-83ce-fe09ccf38295] Running
	I0728 15:11:08.375354   20887 system_pods.go:89] "kube-scheduler-multinode-20220728150610-12923" [ef5d84ce-4249-4af0-b1be-7a3d7f8c2205] Running
	I0728 15:11:08.375359   20887 system_pods.go:89] "storage-provisioner" [29238934-2c0b-4262-80ff-12975d44a715] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 15:11:08.375363   20887 system_pods.go:126] duration metric: took 206.443876ms to wait for k8s-apps to be running ...
	I0728 15:11:08.375368   20887 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 15:11:08.375418   20887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:11:08.384763   20887 system_svc.go:56] duration metric: took 9.390628ms WaitForService to wait for kubelet.
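Note: with --quiet, systemctl is-active prints nothing and signals the unit state purely through its exit code (0 means active), so the runner above only needs the command to succeed; systemctl also exits 0 when at least one of the listed unit patterns is active, so the stray "service" token in the logged command does not break the kubelet check. A local-exec sketch of the same probe:

    package svc

    import "os/exec"

    // unitActive returns true when the given systemd unit is active; the
    // --quiet flag makes the exit status the only signal.
    func unitActive(unit string) bool {
        return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }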
	I0728 15:11:08.384775   20887 kubeadm.go:572] duration metric: took 23.091466866s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0728 15:11:08.384792   20887 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:11:08.563801   20887 request.go:533] Waited for 178.886144ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:56607/api/v1/nodes
	I0728 15:11:08.563847   20887 round_trippers.go:463] GET https://127.0.0.1:56607/api/v1/nodes
	I0728 15:11:08.563855   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:08.563868   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:08.563877   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:08.567623   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:08.567636   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:08.567644   20887 round_trippers.go:580]     Audit-Id: 8bcc8759-8301-44ce-9f23-dea79764f4d7
	I0728 15:11:08.567653   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:08.567658   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:08.567663   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:08.567667   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:08.567671   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:08 GMT
	I0728 15:11:08.567883   20887 request.go:1073] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"801"},"items":[{"metadata":{"name":"multinode-20220728150610-12923","uid":"e7178011-9243-4df2-bb51-695b3a702105","resourceVersion":"689","creationTimestamp":"2022-07-28T22:06:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-20220728150610-12923","kubernetes.io/os":"linux","minikube.k8s.io/commit":"363f4186470802814a32480695fe2a353fd5f551","minikube.k8s.io/name":"multinode-20220728150610-12923","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_07_28T15_06_38_0700","minikube.k8s.io/version":"v1.26.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","op [truncated 16208 chars]
	I0728 15:11:08.568289   20887 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:11:08.568297   20887 node_conditions.go:123] node cpu capacity is 6
	I0728 15:11:08.568305   20887 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:11:08.568308   20887 node_conditions.go:123] node cpu capacity is 6
	I0728 15:11:08.568311   20887 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:11:08.568315   20887 node_conditions.go:123] node cpu capacity is 6
	I0728 15:11:08.568318   20887 node_conditions.go:105] duration metric: took 183.523754ms to run NodePressure ...
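Note: the three ephemeral-storage/cpu pairs above are one readout per node of the three-node cluster. A sketch of producing them from the NodeList with client-go types:

    package nodes

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // logCapacity prints each node's ephemeral-storage and CPU capacity,
    // matching the node_conditions lines above.
    func logCapacity(list *corev1.NodeList) {
        for _, n := range list.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral capacity %s, cpu capacity %s\n",
                n.Name, storage.String(), cpu.String())
        }
    }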
	I0728 15:11:08.568326   20887 start.go:216] waiting for startup goroutines ...
	I0728 15:11:08.568988   20887 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:11:08.569053   20887 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/config.json ...
	I0728 15:11:08.612976   20887 out.go:177] * Starting worker node multinode-20220728150610-12923-m02 in cluster multinode-20220728150610-12923
	I0728 15:11:08.634912   20887 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:11:08.656819   20887 out.go:177] * Pulling base image ...
	I0728 15:11:08.677880   20887 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:11:08.677897   20887 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:11:08.677903   20887 cache.go:57] Caching tarball of preloaded images
	I0728 15:11:08.677993   20887 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:11:08.678003   20887 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0728 15:11:08.678387   20887 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/config.json ...
	I0728 15:11:08.742068   20887 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:11:08.742081   20887 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:11:08.742090   20887 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:11:08.742151   20887 start.go:370] acquiring machines lock for multinode-20220728150610-12923-m02: {Name:mkeb9492df24fdad2e36a2cb175959a1c4df7525 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:11:08.742219   20887 start.go:374] acquired machines lock for "multinode-20220728150610-12923-m02" in 56.341µs
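Note: the machines lock above is acquired with the 500ms retry delay and 10-minute timeout shown in its spec, presumably so that concurrent minikube invocations cannot provision the same machine at once. Minikube uses a lock library for this; the following stdlib-only sketch shows only the acquire-with-timeout shape, not the real implementation:

    package lock

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire creates the lock file exclusively, retrying every delay until
    // timeout elapses; the returned release function removes the file.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for lock %s", path)
            }
            time.Sleep(delay)
        }
    }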
	I0728 15:11:08.742235   20887 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:11:08.742240   20887 fix.go:55] fixHost starting: m02
	I0728 15:11:08.742457   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923-m02 --format={{.State.Status}}
	I0728 15:11:08.805899   20887 fix.go:103] recreateIfNeeded on multinode-20220728150610-12923-m02: state=Stopped err=<nil>
	W0728 15:11:08.805923   20887 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:11:08.849467   20887 out.go:177] * Restarting existing docker container for "multinode-20220728150610-12923-m02" ...
	I0728 15:11:08.870781   20887 cli_runner.go:164] Run: docker start multinode-20220728150610-12923-m02
	I0728 15:11:09.219421   20887 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923-m02 --format={{.State.Status}}
	I0728 15:11:09.285547   20887 kic.go:415] container "multinode-20220728150610-12923-m02" state is running.
	I0728 15:11:09.286384   20887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923-m02
	I0728 15:11:09.354382   20887 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/config.json ...
	I0728 15:11:09.354781   20887 machine.go:88] provisioning docker machine ...
	I0728 15:11:09.354797   20887 ubuntu.go:169] provisioning hostname "multinode-20220728150610-12923-m02"
	I0728 15:11:09.354857   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:09.482595   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:11:09.482758   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56641 <nil> <nil>}
	I0728 15:11:09.482771   20887 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-20220728150610-12923-m02 && echo "multinode-20220728150610-12923-m02" | sudo tee /etc/hostname
	I0728 15:11:09.609831   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-20220728150610-12923-m02
	
	I0728 15:11:09.609997   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:09.675696   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:11:09.676000   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56641 <nil> <nil>}
	I0728 15:11:09.676035   20887 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-20220728150610-12923-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-20220728150610-12923-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-20220728150610-12923-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:11:09.795111   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:11:09.795132   20887 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:11:09.795150   20887 ubuntu.go:177] setting up certificates
	I0728 15:11:09.795159   20887 provision.go:83] configureAuth start
	I0728 15:11:09.795237   20887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923-m02
	I0728 15:11:09.861999   20887 provision.go:138] copyHostCerts
	I0728 15:11:09.862061   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:11:09.862129   20887 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:11:09.862138   20887 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:11:09.862232   20887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:11:09.862391   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:11:09.862429   20887 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:11:09.862434   20887 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:11:09.862496   20887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:11:09.862612   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:11:09.862637   20887 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:11:09.862642   20887 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:11:09.862704   20887 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:11:09.862823   20887 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.multinode-20220728150610-12923-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-20220728150610-12923-m02]
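Note: the server cert above is minted with a SAN list mixing IPs and DNS names (127.0.0.1 appears twice, which is redundant but harmless). A hypothetical crypto/x509 sketch of how such a template could be assembled, not minikube's actual code; the org parameter and function name are illustrative:

    package certs

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // serverCertTemplate splits a mixed SAN list into IPAddresses and
    // DNSNames, as x509 requires, and returns a server-auth template.
    func serverCertTemplate(org string, sans []string) *x509.Certificate {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, san)
            }
        }
        return tmpl
    }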
	I0728 15:11:09.936967   20887 provision.go:172] copyRemoteCerts
	I0728 15:11:09.937021   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:11:09.937074   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:10.001888   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56641 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923-m02/id_rsa Username:docker}
	I0728 15:11:10.090024   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0728 15:11:10.090103   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0728 15:11:10.123659   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0728 15:11:10.123756   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 15:11:10.142514   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0728 15:11:10.142586   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:11:10.167080   20887 provision.go:86] duration metric: configureAuth took 371.914455ms
	I0728 15:11:10.167094   20887 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:11:10.167288   20887 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:11:10.167365   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:10.231636   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:11:10.231792   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56641 <nil> <nil>}
	I0728 15:11:10.231801   20887 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:11:10.350238   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:11:10.350250   20887 ubuntu.go:71] root file system type: overlay
	I0728 15:11:10.350365   20887 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:11:10.350874   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:10.417112   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:11:10.417255   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56641 <nil> <nil>}
	I0728 15:11:10.417304   20887 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:11:10.548474   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
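Note: the %!s(MISSING) in the printf command logged at 15:11:10.417 is not part of the command that actually ran. It is Go's fmt package flagging a %s verb with no matching operand, which appears when a command containing a literal %s is logged through a format-style call; the unit file echoed back above confirms the command itself executed correctly. The artifact is easy to reproduce:

    package main

    import "fmt"

    func main() {
        // A literal %s passed through a format call with no operand yields
        // the marker seen in the logged command above.
        fmt.Println(fmt.Sprintf("printf %s"))
        // Output: printf %!s(MISSING)
    }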
	I0728 15:11:10.548561   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:10.612360   20887 main.go:134] libmachine: Using SSH client type: native
	I0728 15:11:10.612516   20887 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56641 <nil> <nil>}
	I0728 15:11:10.612530   20887 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:11:10.734617   20887 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:11:10.734633   20887 machine.go:91] provisioned docker machine in 1.379858272s
	I0728 15:11:10.734640   20887 start.go:307] post-start starting for "multinode-20220728150610-12923-m02" (driver="docker")
	I0728 15:11:10.734644   20887 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:11:10.734732   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:11:10.734782   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:10.800798   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56641 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923-m02/id_rsa Username:docker}
	I0728 15:11:10.888928   20887 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:11:10.892245   20887 command_runner.go:130] > NAME="Ubuntu"
	I0728 15:11:10.892261   20887 command_runner.go:130] > VERSION="20.04.4 LTS (Focal Fossa)"
	I0728 15:11:10.892268   20887 command_runner.go:130] > ID=ubuntu
	I0728 15:11:10.892278   20887 command_runner.go:130] > ID_LIKE=debian
	I0728 15:11:10.892285   20887 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.4 LTS"
	I0728 15:11:10.892291   20887 command_runner.go:130] > VERSION_ID="20.04"
	I0728 15:11:10.892296   20887 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0728 15:11:10.892303   20887 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0728 15:11:10.892308   20887 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0728 15:11:10.892316   20887 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0728 15:11:10.892324   20887 command_runner.go:130] > VERSION_CODENAME=focal
	I0728 15:11:10.892329   20887 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0728 15:11:10.892427   20887 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:11:10.892445   20887 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:11:10.892452   20887 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:11:10.892458   20887 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:11:10.892464   20887 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:11:10.892574   20887 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:11:10.892704   20887 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:11:10.892712   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> /etc/ssl/certs/129232.pem
	I0728 15:11:10.892846   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:11:10.900165   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:11:10.916534   20887 start.go:310] post-start completed in 181.884315ms
	I0728 15:11:10.916602   20887 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:11:10.916655   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:10.980902   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56641 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923-m02/id_rsa Username:docker}
	I0728 15:11:11.065850   20887 command_runner.go:130] > 12%!
	(MISSING)I0728 15:11:11.065904   20887 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:11:11.070247   20887 command_runner.go:130] > 49G
	I0728 15:11:11.070577   20887 fix.go:57] fixHost completed within 2.328355462s
	I0728 15:11:11.070597   20887 start.go:82] releasing machines lock for "multinode-20220728150610-12923-m02", held for 2.328384474s
	I0728 15:11:11.070676   20887 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923-m02
	I0728 15:11:11.155177   20887 out.go:177] * Found network options:
	I0728 15:11:11.176926   20887 out.go:177]   - NO_PROXY=192.168.58.2
	W0728 15:11:11.197862   20887 proxy.go:118] fail to check proxy env: Error ip not in block
	W0728 15:11:11.197906   20887 proxy.go:118] fail to check proxy env: Error ip not in block
	I0728 15:11:11.198135   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 15:11:11.198146   20887 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:11:11.198191   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:11.198213   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:11:11.266458   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56641 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923-m02/id_rsa Username:docker}
	I0728 15:11:11.266597   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56641 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923-m02/id_rsa Username:docker}
	I0728 15:11:11.354149   20887 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I0728 15:11:11.369112   20887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:11:11.536575   20887 command_runner.go:130] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
	I0728 15:11:11.536586   20887 command_runner.go:130] > <TITLE>302 Moved</TITLE></HEAD><BODY>
	I0728 15:11:11.536591   20887 command_runner.go:130] > <H1>302 Moved</H1>
	I0728 15:11:11.536594   20887 command_runner.go:130] > The document has moved
	I0728 15:11:11.536598   20887 command_runner.go:130] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
	I0728 15:11:11.536601   20887 command_runner.go:130] > </BODY></HTML>
	I0728 15:11:11.537900   20887 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0728 15:11:11.632120   20887 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:11:11.642068   20887 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0728 15:11:11.642078   20887 command_runner.go:130] > [Unit]
	I0728 15:11:11.642083   20887 command_runner.go:130] > Description=Docker Application Container Engine
	I0728 15:11:11.642087   20887 command_runner.go:130] > Documentation=https://docs.docker.com
	I0728 15:11:11.642090   20887 command_runner.go:130] > BindsTo=containerd.service
	I0728 15:11:11.642095   20887 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0728 15:11:11.642099   20887 command_runner.go:130] > Wants=network-online.target
	I0728 15:11:11.642105   20887 command_runner.go:130] > Requires=docker.socket
	I0728 15:11:11.642109   20887 command_runner.go:130] > StartLimitBurst=3
	I0728 15:11:11.642112   20887 command_runner.go:130] > StartLimitIntervalSec=60
	I0728 15:11:11.642115   20887 command_runner.go:130] > [Service]
	I0728 15:11:11.642119   20887 command_runner.go:130] > Type=notify
	I0728 15:11:11.642123   20887 command_runner.go:130] > Restart=on-failure
	I0728 15:11:11.642132   20887 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0728 15:11:11.642140   20887 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0728 15:11:11.642146   20887 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0728 15:11:11.642152   20887 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0728 15:11:11.642158   20887 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0728 15:11:11.642163   20887 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0728 15:11:11.642170   20887 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0728 15:11:11.642176   20887 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0728 15:11:11.642187   20887 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0728 15:11:11.642193   20887 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0728 15:11:11.642197   20887 command_runner.go:130] > ExecStart=
	I0728 15:11:11.642209   20887 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0728 15:11:11.642217   20887 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0728 15:11:11.642222   20887 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0728 15:11:11.642228   20887 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0728 15:11:11.642231   20887 command_runner.go:130] > LimitNOFILE=infinity
	I0728 15:11:11.642234   20887 command_runner.go:130] > LimitNPROC=infinity
	I0728 15:11:11.642238   20887 command_runner.go:130] > LimitCORE=infinity
	I0728 15:11:11.642243   20887 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0728 15:11:11.642248   20887 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0728 15:11:11.642251   20887 command_runner.go:130] > TasksMax=infinity
	I0728 15:11:11.642255   20887 command_runner.go:130] > TimeoutStartSec=0
	I0728 15:11:11.642260   20887 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0728 15:11:11.642264   20887 command_runner.go:130] > Delegate=yes
	I0728 15:11:11.642268   20887 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0728 15:11:11.642271   20887 command_runner.go:130] > KillMode=process
	I0728 15:11:11.642280   20887 command_runner.go:130] > [Install]
	I0728 15:11:11.642284   20887 command_runner.go:130] > WantedBy=multi-user.target
	I0728 15:11:11.642298   20887 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:11:11.642350   20887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:11:11.651473   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:11:11.664796   20887 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 15:11:11.664809   20887 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0728 15:11:11.665635   20887 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:11:11.746043   20887 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:11:11.816441   20887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:11:11.892689   20887 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:11:12.114905   20887 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 15:11:12.188963   20887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:11:12.267111   20887 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 15:11:12.276414   20887 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 15:11:12.276482   20887 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 15:11:12.280028   20887 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0728 15:11:12.280038   20887 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0728 15:11:12.280046   20887 command_runner.go:130] > Device: 10002fh/1048623d	Inode: 134         Links: 1
	I0728 15:11:12.280052   20887 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0728 15:11:12.280058   20887 command_runner.go:130] > Access: 2022-07-28 22:11:11.566651516 +0000
	I0728 15:11:12.280067   20887 command_runner.go:130] > Modify: 2022-07-28 22:11:11.566651516 +0000
	I0728 15:11:12.280074   20887 command_runner.go:130] > Change: 2022-07-28 22:11:11.575651516 +0000
	I0728 15:11:12.280079   20887 command_runner.go:130] >  Birth: -
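Note: the 60-second wait above polls stat on the socket path until it exists. A stdlib sketch of the same loop (the 500ms poll interval here is an assumption, not taken from the log):

    package wait

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path until it appears or the timeout elapses,
    // mirroring the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("socket %s did not appear within %v", path, timeout)
    }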
	I0728 15:11:12.280215   20887 start.go:471] Will wait 60s for crictl version
	I0728 15:11:12.280257   20887 ssh_runner.go:195] Run: sudo crictl version
	I0728 15:11:12.306008   20887 command_runner.go:130] > Version:  0.1.0
	I0728 15:11:12.306019   20887 command_runner.go:130] > RuntimeName:  docker
	I0728 15:11:12.306022   20887 command_runner.go:130] > RuntimeVersion:  20.10.17
	I0728 15:11:12.306026   20887 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I0728 15:11:12.307991   20887 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0728 15:11:12.308065   20887 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:11:12.340484   20887 command_runner.go:130] > 20.10.17
	I0728 15:11:12.343408   20887 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:11:12.376555   20887 command_runner.go:130] > 20.10.17
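Note: docker version --format applies a Go template to the version data, so the probe above gets back just the bare server version string. An os/exec sketch of the same probe:

    package docker

    import (
        "os/exec"
        "strings"
    )

    // serverVersion runs the same command as the probe above and trims the
    // trailing newline from the templated output.
    func serverVersion() (string, error) {
        out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
        return strings.TrimSpace(string(out)), err
    }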
	I0728 15:11:12.422851   20887 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 15:11:12.443932   20887 out.go:177]   - env NO_PROXY=192.168.58.2
	I0728 15:11:12.464964   20887 cli_runner.go:164] Run: docker exec -t multinode-20220728150610-12923-m02 dig +short host.docker.internal
	I0728 15:11:12.583851   20887 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:11:12.583938   20887 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:11:12.588188   20887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
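Note: the /etc/hosts update above uses a filter-then-append idiom: strip any stale `host.minikube.internal` line, append the fresh tab-separated mapping, and copy the temp file back with sudo. Generalized as a hypothetical helper (the function name is a placeholder, not minikube code):

	# replace-or-add a tab-separated /etc/hosts entry
	update_host_entry() {
	  local ip="$1" name="$2"
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts
	}
	update_host_entry 192.168.65.2 host.minikube.internal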
	I0728 15:11:12.597159   20887 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923 for IP: 192.168.58.3
	I0728 15:11:12.597283   20887 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:11:12.597333   20887 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:11:12.597340   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0728 15:11:12.597360   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0728 15:11:12.597379   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0728 15:11:12.597400   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0728 15:11:12.597492   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:11:12.597529   20887 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:11:12.597541   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:11:12.597579   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:11:12.597611   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:11:12.597641   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:11:12.597707   20887 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:11:12.597736   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem -> /usr/share/ca-certificates/12923.pem
	I0728 15:11:12.597753   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> /usr/share/ca-certificates/129232.pem
	I0728 15:11:12.597768   20887 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:11:12.598120   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:11:12.615188   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:11:12.631915   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:11:12.649444   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:11:12.666887   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:11:12.684100   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:11:12.700512   20887 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:11:12.717728   20887 ssh_runner.go:195] Run: openssl version
	I0728 15:11:12.723406   20887 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0728 15:11:12.723541   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:11:12.731289   20887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:11:12.735144   20887 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:11:12.735164   20887 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:11:12.735201   20887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:11:12.740385   20887 command_runner.go:130] > b5213941
	I0728 15:11:12.740715   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 15:11:12.748119   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:11:12.756348   20887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:11:12.760441   20887 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:11:12.760461   20887 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:11:12.760498   20887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:11:12.765511   20887 command_runner.go:130] > 51391683
	I0728 15:11:12.765888   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 15:11:12.773286   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:11:12.781444   20887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:11:12.785347   20887 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:11:12.785386   20887 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:11:12.785432   20887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:11:12.790384   20887 command_runner.go:130] > 3ec20f2e
	I0728 15:11:12.790661   20887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
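Note: the three symlinks created above follow the standard OpenSSL convention: a CA in /etc/ssl/certs is looked up by its subject hash, so each PEM gets a `<hash>.0` symlink. The hashes logged (b5213941, 51391683, 3ec20f2e) come straight from `openssl x509 -hash`. Reproducing one by hand:

	# derive the subject-hash link name for a CA cert (standard c_rehash behavior)
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"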
	I0728 15:11:12.797946   20887 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:11:12.866746   20887 command_runner.go:130] > systemd
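Note: Docker reports the systemd cgroup driver here, and the generated KubeletConfiguration below sets `cgroupDriver: systemd` to match; a mismatch between the two is a classic source of kubelet startup failures. A quick manual cross-check on the node:

	docker info --format '{{.CgroupDriver}}'            # expect: systemd
	sudo grep cgroupDriver /var/lib/kubelet/config.yaml # expect: cgroupDriver: systemd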
	I0728 15:11:12.869931   20887 cni.go:95] Creating CNI manager for ""
	I0728 15:11:12.869941   20887 cni.go:156] 3 nodes found, recommending kindnet
	I0728 15:11:12.869962   20887 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:11:12.869972   20887 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-20220728150610-12923 NodeName:multinode-20220728150610-12923-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:11:12.870069   20887 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-20220728150610-12923-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
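Note: the kubeadm config above is generated per node; for this worker only the InitConfiguration's nodeRegistration section (criSocket, node name, node-ip, empty taints) really matters, since the ClusterConfiguration is read back from the cluster during join (the "[preflight] Reading configuration from the cluster..." lines further down). To compare any section against upstream defaults for this kubeadm version (a standard kubeadm command, shown as a sketch):

	kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration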
	I0728 15:11:12.870127   20887 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-20220728150610-12923-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728150610-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
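Note: the [Unit]/[Service]/[Install] fragment above is the systemd drop-in minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 496-byte scp a few lines below); the empty `ExecStart=` line first clears the packaged start command before substituting minikube's flags, which is standard systemd override practice. To inspect the merged unit on the node:

	systemctl cat kubelet   # prints kubelet.service plus every drop-in, merged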
	I0728 15:11:12.870188   20887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 15:11:12.877349   20887 command_runner.go:130] > kubeadm
	I0728 15:11:12.877357   20887 command_runner.go:130] > kubectl
	I0728 15:11:12.877360   20887 command_runner.go:130] > kubelet
	I0728 15:11:12.878031   20887 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:11:12.878080   20887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0728 15:11:12.885505   20887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (496 bytes)
	I0728 15:11:12.898035   20887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:11:12.911674   20887 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:11:12.915275   20887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:11:12.924349   20887 host.go:66] Checking if "multinode-20220728150610-12923" exists ...
	I0728 15:11:12.924515   20887 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:11:12.924521   20887 start.go:285] JoinCluster: &{Name:multinode-20220728150610-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:multinode-20220728150610-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.168.58.4 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:11:12.924592   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0728 15:11:12.924636   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:11:12.989637   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:11:13.122618   20887 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 
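Note: the join command above was minted on the control-plane node with `--ttl=0`, which makes the bootstrap token non-expiring, so the same command stays valid across every retry below. It can be regenerated at any time:

	# run on the control-plane node (standard kubeadm usage)
	kubeadm token create --print-join-command --ttl=0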
	I0728 15:11:13.122659   20887 start.go:298] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:11:13.122679   20887 host.go:66] Checking if "multinode-20220728150610-12923" exists ...
	I0728 15:11:13.122920   20887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl drain multinode-20220728150610-12923-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0728 15:11:13.122975   20887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:11:13.188038   20887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56608 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:11:13.309811   20887 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0728 15:11:13.335024   20887 command_runner.go:130] ! WARNING: ignoring DaemonSet-managed Pods: kube-system/kindnet-v5hq8, kube-system/kube-proxy-bxdk6
	I0728 15:11:16.344328   20887 command_runner.go:130] > node/multinode-20220728150610-12923-m02 cordoned
	I0728 15:11:16.344348   20887 command_runner.go:130] > pod "busybox-d46db594c-vg2w2" has DeletionTimestamp older than 1 seconds, skipping
	I0728 15:11:16.344353   20887 command_runner.go:130] > node/multinode-20220728150610-12923-m02 drained
	I0728 15:11:16.344367   20887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl drain multinode-20220728150610-12923-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.221464776s)
	I0728 15:11:16.344376   20887 node.go:109] successfully drained node "m02"
	I0728 15:11:16.344693   20887 loader.go:372] Config loaded from file:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:11:16.344940   20887 kapi.go:59] client config for multinode-20220728150610-12923: &rest.Config{Host:"https://127.0.0.1:56607", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/multinode-20220728150610-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 15:11:16.345183   20887 request.go:1073] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0728 15:11:16.345210   20887 round_trippers.go:463] DELETE https://127.0.0.1:56607/api/v1/nodes/multinode-20220728150610-12923-m02
	I0728 15:11:16.345214   20887 round_trippers.go:469] Request Headers:
	I0728 15:11:16.345220   20887 round_trippers.go:473]     Content-Type: application/json
	I0728 15:11:16.345228   20887 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0728 15:11:16.345233   20887 round_trippers.go:473]     Accept: application/json, */*
	I0728 15:11:16.348628   20887 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0728 15:11:16.348643   20887 round_trippers.go:577] Response Headers:
	I0728 15:11:16.348651   20887 round_trippers.go:580]     Audit-Id: 544b1357-f85a-4ec7-ab63-98f15a25bae7
	I0728 15:11:16.348658   20887 round_trippers.go:580]     Cache-Control: no-cache, private
	I0728 15:11:16.348665   20887 round_trippers.go:580]     Content-Type: application/json
	I0728 15:11:16.348672   20887 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d0de12d9-22ba-4e80-8051-a0caccf4bf30
	I0728 15:11:16.348682   20887 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: effbc6cb-264d-4435-bc4c-44808da3bc17
	I0728 15:11:16.348688   20887 round_trippers.go:580]     Content-Length: 185
	I0728 15:11:16.348693   20887 round_trippers.go:580]     Date: Thu, 28 Jul 2022 22:11:16 GMT
	I0728 15:11:16.348705   20887 request.go:1073] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-20220728150610-12923-m02","kind":"nodes","uid":"2d4cf78c-ed12-4ef8-9967-e85ae2ffd232"}}
	I0728 15:11:16.348725   20887 node.go:125] successfully deleted node "m02"
	I0728 15:11:16.348731   20887 start.go:302] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
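Note: before re-joining, minikube drains the stale m02 node and then deletes its Node object (the DELETE request above returned 200 OK). The manual equivalent with kubectl, using the same flags the log shows:

	kubectl drain multinode-20220728150610-12923-m02 --ignore-daemonsets --delete-emptydir-data --force
	kubectl delete node multinode-20220728150610-12923-m02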
	I0728 15:11:16.348747   20887 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:11:16.348759   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02"
	I0728 15:11:16.408615   20887 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 15:11:16.524859   20887 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 15:11:16.524888   20887 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 15:11:16.543951   20887 command_runner.go:130] ! W0728 22:11:16.428638    1126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:11:16.543966   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0728 15:11:16.543988   20887 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0728 15:11:16.543998   20887 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0728 15:11:16.544005   20887 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0728 15:11:16.544012   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0728 15:11:16.544024   20887 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0728 15:11:16.544036   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0728 15:11:16.544075   20887 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:11:16.428638    1126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:16.544085   20887 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0728 15:11:16.544092   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
	I0728 15:11:16.577438   20887 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0728 15:11:16.577455   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:16.577478   20887 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:16.577513   20887 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:11:16.428638    1126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
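Note: two distinct failures interlock in this retry loop. `kubeadm join` aborts because a Node named multinode-20220728150610-12923-m02 with status "Ready" already exists (the kubelet on m02 is evidently still running and re-registers the node between the DELETE above and the join), and the fallback `kubeadm reset` refuses to run because both containerd.sock and cri-dockerd.sock are present on the host and no criSocket was supplied. A plausible manual unblock suggested by the error text (a hypothesis, not the test's fix):

	# on m02: stop the kubelet so it cannot re-register, then reset with an explicit socket
	sudo systemctl stop kubelet
	sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock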
	I0728 15:11:27.624194   20887 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:11:27.624334   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02"
	I0728 15:11:27.658783   20887 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 15:11:27.762757   20887 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 15:11:27.762770   20887 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 15:11:27.780178   20887 command_runner.go:130] ! W0728 22:11:27.664907    1640 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:11:27.780192   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0728 15:11:27.780207   20887 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0728 15:11:27.780212   20887 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0728 15:11:27.780216   20887 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0728 15:11:27.780222   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0728 15:11:27.780232   20887 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0728 15:11:27.780240   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0728 15:11:27.780266   20887 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:11:27.664907    1640 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:27.780278   20887 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0728 15:11:27.780287   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
	I0728 15:11:27.816375   20887 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0728 15:11:27.816391   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:27.816407   20887 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:27.816419   20887 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:11:27.664907    1640 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:49.424452   20887 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:11:49.424543   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02"
	I0728 15:11:49.458511   20887 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 15:11:49.569686   20887 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 15:11:49.569700   20887 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 15:11:49.586634   20887 command_runner.go:130] ! W0728 22:11:49.458810    1871 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:11:49.586649   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0728 15:11:49.586657   20887 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0728 15:11:49.586667   20887 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0728 15:11:49.586677   20887 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0728 15:11:49.586683   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0728 15:11:49.586692   20887 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0728 15:11:49.586699   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0728 15:11:49.586729   20887 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:11:49.458810    1871 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:49.586737   20887 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0728 15:11:49.586749   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
	I0728 15:11:49.620524   20887 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0728 15:11:49.620538   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:49.620553   20887 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:11:49.620563   20887 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:11:49.458810    1871 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:15.825070   20887 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:12:15.834433   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02"
	I0728 15:12:15.867913   20887 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 15:12:15.973713   20887 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 15:12:15.973727   20887 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 15:12:15.991632   20887 command_runner.go:130] ! W0728 22:12:15.887398    2126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:12:15.991645   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0728 15:12:15.991653   20887 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0728 15:12:15.991658   20887 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0728 15:12:15.991662   20887 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0728 15:12:15.991668   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0728 15:12:15.991677   20887 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0728 15:12:15.991682   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0728 15:12:15.991710   20887 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:12:15.887398    2126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:15.991717   20887 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0728 15:12:15.991725   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
	I0728 15:12:16.026740   20887 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0728 15:12:16.026757   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:16.026777   20887 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:16.026789   20887 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:12:15.887398    2126 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
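Note: every subsequent attempt hits the identical pair of errors, so the growing backoff (11s, 21s, 26s, 31s, 46s) cannot help: the reset that would clear the stale kubelet state keeps failing on the ambiguous CRI socket, and the join keeps colliding with the re-registered Node. Passing the socket to `kubeadm reset` as sketched earlier, or deleting the Node only after stopping m02's kubelet, would break the cycle (again a hypothesis drawn from the log, not a verified fix).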
	I0728 15:12:47.674673   20887 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:12:47.674715   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02"
	I0728 15:12:47.708880   20887 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 15:12:47.810500   20887 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 15:12:47.810513   20887 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 15:12:47.829278   20887 command_runner.go:130] ! W0728 22:12:47.729965    2439 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:12:47.829292   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0728 15:12:47.829301   20887 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0728 15:12:47.829305   20887 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0728 15:12:47.829311   20887 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0728 15:12:47.829318   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0728 15:12:47.829327   20887 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0728 15:12:47.829333   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0728 15:12:47.829358   20887 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:12:47.729965    2439 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:47.829367   20887 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0728 15:12:47.829374   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
	I0728 15:12:47.864820   20887 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0728 15:12:47.864837   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:47.864860   20887 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:12:47.864873   20887 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:12:47.729965    2439 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:13:34.676416   20887 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0728 15:13:34.676475   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02"
	I0728 15:13:34.711703   20887 command_runner.go:130] > [preflight] Running pre-flight checks
	I0728 15:13:34.817378   20887 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0728 15:13:34.817398   20887 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0728 15:13:34.834797   20887 command_runner.go:130] ! W0728 22:13:34.723396    2846 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0728 15:13:34.834811   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0728 15:13:34.834822   20887 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0728 15:13:34.834827   20887 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I0728 15:13:34.834832   20887 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0728 15:13:34.834837   20887 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0728 15:13:34.834848   20887 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I0728 15:13:34.834857   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E0728 15:13:34.834898   20887 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:13:34.723396    2846 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:13:34.834908   20887 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
	I0728 15:13:34.834917   20887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
	I0728 15:13:34.870155   20887 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I0728 15:13:34.870174   20887 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I0728 15:13:34.870194   20887 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I0728 15:13:34.870211   20887 start.go:287] JoinCluster complete in 2m21.947191876s
	I0728 15:13:34.892247   20887 out.go:177] 
	W0728 15:13:34.914357   20887 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zll88r.90idjitcu27vd1qa --discovery-token-ca-cert-hash sha256:c7b6608ca8e14dcd26f1726fbd6f346c70a25761e1351884be35598b84088a50 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-20220728150610-12923-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W0728 22:13:34.723396    2846 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-20220728150610-12923-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 15:13:34.914387   20887 out.go:239] * 
	W0728 15:13:34.915469   20887 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 15:13:34.978957   20887 out.go:177] 
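	
	The failure above compounds two problems: the API server still holds a Node object
	named "multinode-20220728150610-12923-m02" from the earlier run, so kubeadm refuses
	to join a new node under that name; and the automatic recovery path fails because
	"kubeadm reset" sees two CRI sockets (containerd and cri-dockerd) and aborts rather
	than guess. A minimal manual workaround, assuming kubectl is pointed at this cluster
	and cri-dockerd is the intended runtime (both assumptions, not captured in this run):
	
	  # On the control plane: free the node name by removing the stale Node object,
	  # as the kubeadm error message itself suggests.
	  kubectl delete node multinode-20220728150610-12923-m02
	
	  # On the worker: reset with the CRI socket named explicitly, avoiding the
	  # "multiple CRI endpoints" abort reported above.
	  sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock
	
	  # Then re-run the same kubeadm join command shown in the log.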
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 22:10:17 UTC, end at Thu 2022-07-28 22:13:36 UTC. --
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[130]: time="2022-07-28T22:10:19.691419942Z" level=info msg="Daemon shutdown complete"
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[130]: time="2022-07-28T22:10:19.691490938Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 28 22:10:19 multinode-20220728150610-12923 systemd[1]: docker.service: Succeeded.
	Jul 28 22:10:19 multinode-20220728150610-12923 systemd[1]: Stopped Docker Application Container Engine.
	Jul 28 22:10:19 multinode-20220728150610-12923 systemd[1]: Starting Docker Application Container Engine...
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.743857257Z" level=info msg="Starting up"
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.745459536Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.745495403Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.745514932Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.745522306Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.746469796Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.746499756Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.746511409Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.746518971Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.749891018Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.754699310Z" level=info msg="Loading containers: start."
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.848895073Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.882464077Z" level=info msg="Loading containers: done."
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.891646086Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.891712479Z" level=info msg="Daemon has completed initialization"
	Jul 28 22:10:19 multinode-20220728150610-12923 systemd[1]: Started Docker Application Container Engine.
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.916736342Z" level=info msg="API listen on [::]:2376"
	Jul 28 22:10:19 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:10:19.919827158Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 28 22:11:01 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:11:01.866764921Z" level=error msg="collecting stats for 3d78554b9e6703f2b58b68a7c598c8b22f762926942e815ddb25439051836bd6: no metrics received"
	Jul 28 22:11:01 multinode-20220728150610-12923 dockerd[603]: time="2022-07-28T22:11:01.883330505Z" level=info msg="ignoring event" container=3d78554b9e6703f2b58b68a7c598c8b22f762926942e815ddb25439051836bd6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	dc01b2e9e94dc       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   af880bc4283ee
	3b3ed23384d82       6fb66cd78abfe                                                                                         3 minutes ago       Running             kindnet-cni               1                   39c7425173aff
	3d78554b9e670       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   af880bc4283ee
	0e5cdaa6af5e1       2ae1ba6417cbc                                                                                         3 minutes ago       Running             kube-proxy                1                   31c584bd977e8
	0ca5e42a76a94       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   9ce16a15dbf8c
	29c493c223357       a4ca41631cc7a                                                                                         3 minutes ago       Running             coredns                   1                   198f69107b0e2
	c0df34a7cab2c       3a5aa3a515f5d                                                                                         3 minutes ago       Running             kube-scheduler            1                   3b32246c0c72a
	dc73ba8daa9c9       aebe758cef4cd                                                                                         3 minutes ago       Running             etcd                      1                   e990ecbfcc97c
	19bceddf903be       586c112956dfc                                                                                         3 minutes ago       Running             kube-controller-manager   1                   2ff25e723571c
	8b0cf038003c1       d521dd763e2e3                                                                                         3 minutes ago       Running             kube-apiserver            1                   4485543ece1d1
	a9f751afaafaf       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago       Exited              busybox                   0                   4b4f4e1edf9e8
	89466f3f83060       a4ca41631cc7a                                                                                         6 minutes ago       Exited              coredns                   0                   50a595b779036
	7b9caab60a97a       kindest/kindnetd@sha256:39494477a3fa001aae716b704a8991f4f62d2ccf1aaaa65692da6c805b18856c              6 minutes ago       Exited              kindnet-cni               0                   0d0894f41f2a3
	848acc25a7d78       2ae1ba6417cbc                                                                                         6 minutes ago       Exited              kube-proxy                0                   4a96e7ffb1b4f
	8e2030fdbc79d       586c112956dfc                                                                                         7 minutes ago       Exited              kube-controller-manager   0                   3641ce6d4a532
	abab41f9a9046       aebe758cef4cd                                                                                         7 minutes ago       Exited              etcd                      0                   21e11a020b838
	9db2ba48d7a67       3a5aa3a515f5d                                                                                         7 minutes ago       Exited              kube-scheduler            0                   bb142f1efac98
	06994bc702bbd       d521dd763e2e3                                                                                         7 minutes ago       Exited              kube-apiserver            0                   e71a37402f1e3
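	
	The container status table above is crictl-style output rendered via the CRI. With
	the docker runtime used by this profile, the same containers can be cross-checked on
	the node itself; a sketch assuming ssh access to this minikube profile:
	
	  # List the kubelet-managed containers (kubelet names them k8s_<container>_<pod>_...).
	  minikube ssh -p multinode-20220728150610-12923 -- docker ps -a --filter name=k8s_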
	
	* 
	* ==> coredns [29c493c22335] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> coredns [89466f3f8306] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-20220728150610-12923
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220728150610-12923
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551
	                    minikube.k8s.io/name=multinode-20220728150610-12923
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_28T15_06_38_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Jul 2022 22:06:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220728150610-12923
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Jul 2022 22:13:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Jul 2022 22:10:30 +0000   Thu, 28 Jul 2022 22:06:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Jul 2022 22:10:30 +0000   Thu, 28 Jul 2022 22:06:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Jul 2022 22:10:30 +0000   Thu, 28 Jul 2022 22:06:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Jul 2022 22:10:30 +0000   Thu, 28 Jul 2022 22:07:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-20220728150610-12923
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                48a3d738-f24e-4a95-92ad-ce5484657fa2
	  Boot ID:                    defbf4cf-33ea-4fb3-a1e4-ed0c170fdaf9
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                      ------------  ----------  ---------------  -------------  ---
	  default                     busybox-d46db594c-jwp7z                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 coredns-6d4b75cb6d-dfxk7                                  100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     6m47s
	  kube-system                 etcd-multinode-20220728150610-12923                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         7m
	  kube-system                 kindnet-tlp2m                                             100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m47s
	  kube-system                 kube-apiserver-multinode-20220728150610-12923             250m (4%)     0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-controller-manager-multinode-20220728150610-12923    200m (3%)     0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 kube-proxy-cdz7z                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-scheduler-multinode-20220728150610-12923             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 storage-provisioner                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m5s                   kube-proxy       
	  Normal  Starting                 6m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m10s (x5 over 7m11s)  kubelet          Node multinode-20220728150610-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m10s (x5 over 7m11s)  kubelet          Node multinode-20220728150610-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m10s (x4 over 7m11s)  kubelet          Node multinode-20220728150610-12923 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m                     kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m                     kubelet          Node multinode-20220728150610-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m                     kubelet          Node multinode-20220728150610-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m                     kubelet          Node multinode-20220728150610-12923 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m48s                  node-controller  Node multinode-20220728150610-12923 event: Registered Node multinode-20220728150610-12923 in Controller
	  Normal  NodeReady                6m19s                  kubelet          Node multinode-20220728150610-12923 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m12s (x8 over 3m12s)  kubelet          Node multinode-20220728150610-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m12s (x8 over 3m12s)  kubelet          Node multinode-20220728150610-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m12s (x7 over 3m12s)  kubelet          Node multinode-20220728150610-12923 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m56s                  node-controller  Node multinode-20220728150610-12923 event: Registered Node multinode-20220728150610-12923 in Controller
	
	
	Name:               multinode-20220728150610-12923-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220728150610-12923-m02
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Jul 2022 22:11:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220728150610-12923-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Jul 2022 22:13:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Jul 2022 22:11:16 +0000   Thu, 28 Jul 2022 22:11:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Jul 2022 22:11:16 +0000   Thu, 28 Jul 2022 22:11:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Jul 2022 22:11:16 +0000   Thu, 28 Jul 2022 22:11:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Jul 2022 22:11:16 +0000   Thu, 28 Jul 2022 22:11:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-20220728150610-12923-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                815b28dc-78c1-499e-8dc5-eb6de4c776f8
	  Boot ID:                    defbf4cf-33ea-4fb3-a1e4-ed0c170fdaf9
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-v5hq8       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m52s
	  kube-system                 kube-proxy-bxdk6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m42s                  kube-proxy  
	  Normal  Starting                 2m18s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  5m52s (x8 over 6m5s)   kubelet     Node multinode-20220728150610-12923-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m52s (x8 over 6m5s)   kubelet     Node multinode-20220728150610-12923-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m27s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m21s (x7 over 2m27s)  kubelet     Node multinode-20220728150610-12923-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x7 over 2m27s)  kubelet     Node multinode-20220728150610-12923-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m27s)  kubelet     Node multinode-20220728150610-12923-m02 status is now: NodeHasSufficientPID
	
	
	Name:               multinode-20220728150610-12923-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-20220728150610-12923-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Jul 2022 22:09:25 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-20220728150610-12923-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Jul 2022 22:09:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 28 Jul 2022 22:09:35 +0000   Thu, 28 Jul 2022 22:11:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 28 Jul 2022 22:09:35 +0000   Thu, 28 Jul 2022 22:11:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 28 Jul 2022 22:09:35 +0000   Thu, 28 Jul 2022 22:11:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 28 Jul 2022 22:09:35 +0000   Thu, 28 Jul 2022 22:11:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.58.4
	  Hostname:    multinode-20220728150610-12923-m03
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                5950087e-691f-43a1-84fc-63cd7a5af112
	  Boot ID:                    defbf4cf-33ea-4fb3-a1e4-ed0c170fdaf9
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-d46db594c-lb4v6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kindnet-52mvf              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m58s
	  kube-system                 kube-proxy-cn9x2           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m49s                  kube-proxy       
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m58s (x8 over 5m10s)  kubelet          Node multinode-20220728150610-12923-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s (x8 over 5m10s)  kubelet          Node multinode-20220728150610-12923-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m12s (x2 over 4m12s)  kubelet          Node multinode-20220728150610-12923-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x2 over 4m12s)  kubelet          Node multinode-20220728150610-12923-m03 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     4m12s (x2 over 4m12s)  kubelet          Node multinode-20220728150610-12923-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m2s                   kubelet          Node multinode-20220728150610-12923-m03 status is now: NodeReady
	  Normal  RegisteredNode           2m56s                  node-controller  Node multinode-20220728150610-12923-m03 event: Registered Node multinode-20220728150610-12923-m03 in Controller
	  Normal  NodeNotReady             2m16s                  node-controller  Node multinode-20220728150610-12923-m03 status is now: NodeNotReady
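	
	For m03 above, the node-controller stopped receiving kubelet heartbeats (RenewTime
	22:09:35, transition 22:11:21) and applied the standard unreachable taints
	(NoSchedule/NoExecute). Those taints can be confirmed directly; a sketch assuming
	kubectl access to this cluster:
	
	  # Print the taints the node-controller placed on the unreachable node.
	  kubectl get node multinode-20220728150610-12923-m03 -o jsonpath='{.spec.taints}'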
	
	* 
	* ==> dmesg <==
	* [  +0.001439] FS-Cache: O-key=[8] '377f2e0300000000'
	[  +0.001052] FS-Cache: N-cookie c=000000003d1587d4 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001741] FS-Cache: N-cookie d=00000000b3020e27 n=00000000e968409d
	[  +0.001451] FS-Cache: N-key=[8] '377f2e0300000000'
	[  +0.001921] FS-Cache: Duplicate cookie detected
	[  +0.001022] FS-Cache: O-cookie c=00000000fc272a13 [p=00000000a81fea9c fl=226 nc=0 na=1]
	[  +0.001775] FS-Cache: O-cookie d=00000000b3020e27 n=0000000005e7aa76
	[  +0.001457] FS-Cache: O-key=[8] '377f2e0300000000'
	[  +0.001097] FS-Cache: N-cookie c=000000003d1587d4 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001747] FS-Cache: N-cookie d=00000000b3020e27 n=00000000992429b3
	[  +0.001462] FS-Cache: N-key=[8] '377f2e0300000000'
	[  +3.054735] FS-Cache: Duplicate cookie detected
	[  +0.001042] FS-Cache: O-cookie c=00000000d2d2cc51 [p=00000000a81fea9c fl=226 nc=0 na=1]
	[  +0.001759] FS-Cache: O-cookie d=00000000b3020e27 n=0000000030e50417
	[  +0.001442] FS-Cache: O-key=[8] '367f2e0300000000'
	[  +0.001131] FS-Cache: N-cookie c=000000007bcf2158 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001760] FS-Cache: N-cookie d=00000000b3020e27 n=00000000d445df4c
	[  +0.001503] FS-Cache: N-key=[8] '367f2e0300000000'
	[  +0.439912] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=000000000a15bb65 [p=00000000a81fea9c fl=226 nc=0 na=1]
	[  +0.001773] FS-Cache: O-cookie d=00000000b3020e27 n=00000000a4d7a621
	[  +0.001447] FS-Cache: O-key=[8] '3e7f2e0300000000'
	[  +0.001103] FS-Cache: N-cookie c=000000001f485fd0 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001738] FS-Cache: N-cookie d=00000000b3020e27 n=000000000eab18f1
	[  +0.001440] FS-Cache: N-key=[8] '3e7f2e0300000000'
	
	* 
	* ==> etcd [abab41f9a904] <==
	* {"level":"info","ts":"2022-07-28T22:06:31.755Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:06:31.755Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:06:31.755Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-20220728150610-12923 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T22:06:31.755Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:06:31.755Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-28T22:06:31.755Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:06:31.803Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T22:06:31.803Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T22:06:31.804Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-07-28T22:07:25.974Z","caller":"traceutil/trace.go:171","msg":"trace[822898550] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"285.178283ms","start":"2022-07-28T22:07:25.689Z","end":"2022-07-28T22:07:25.974Z","steps":["trace[822898550] 'process raft request'  (duration: 284.479011ms)"],"step_count":1}
	{"level":"warn","ts":"2022-07-28T22:08:09.509Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"132.479475ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3238512847853225733 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/default/busybox-d46db594c-jwp7z\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/busybox-d46db594c-jwp7z\" value_size:1614 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2022-07-28T22:08:09.509Z","caller":"traceutil/trace.go:171","msg":"trace[1185220604] linearizableReadLoop","detail":"{readStateIndex:536; appliedIndex:533; }","duration":"131.578653ms","start":"2022-07-28T22:08:09.378Z","end":"2022-07-28T22:08:09.509Z","steps":["trace[1185220604] 'read index received'  (duration: 56.115309ms)","trace[1185220604] 'applied index is now lower than readState.Index'  (duration: 75.46263ms)"],"step_count":2}
	{"level":"info","ts":"2022-07-28T22:08:09.509Z","caller":"traceutil/trace.go:171","msg":"trace[1762933416] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"133.559976ms","start":"2022-07-28T22:08:09.376Z","end":"2022-07-28T22:08:09.509Z","steps":["trace[1762933416] 'compare'  (duration: 132.311696ms)"],"step_count":1}
	{"level":"info","ts":"2022-07-28T22:08:09.509Z","caller":"traceutil/trace.go:171","msg":"trace[2107854108] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"133.56519ms","start":"2022-07-28T22:08:09.376Z","end":"2022-07-28T22:08:09.509Z","steps":["trace[2107854108] 'process raft request'  (duration: 133.381766ms)"],"step_count":1}
	{"level":"info","ts":"2022-07-28T22:08:09.510Z","caller":"traceutil/trace.go:171","msg":"trace[1736338045] transaction","detail":"{read_only:false; response_revision:504; number_of_response:1; }","duration":"134.054693ms","start":"2022-07-28T22:08:09.376Z","end":"2022-07-28T22:08:09.510Z","steps":["trace[1736338045] 'process raft request'  (duration: 133.251007ms)"],"step_count":1}
	{"level":"warn","ts":"2022-07-28T22:08:09.510Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"132.233903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/busybox\" ","response":"range_response_count:1 size:2627"}
	{"level":"info","ts":"2022-07-28T22:08:09.510Z","caller":"traceutil/trace.go:171","msg":"trace[97274705] range","detail":"{range_begin:/registry/deployments/default/busybox; range_end:; response_count:1; response_revision:505; }","duration":"132.623574ms","start":"2022-07-28T22:08:09.378Z","end":"2022-07-28T22:08:09.510Z","steps":["trace[97274705] 'agreement among raft nodes before linearized reading'  (duration: 132.214615ms)"],"step_count":1}
	{"level":"info","ts":"2022-07-28T22:09:39.499Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-07-28T22:09:39.500Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"multinode-20220728150610-12923","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/07/28 22:09:39 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/07/28 22:09:39 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-07-28T22:09:39.507Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-07-28T22:09:39.508Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-07-28T22:09:39.510Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-07-28T22:09:39.510Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"multinode-20220728150610-12923","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> etcd [dc73ba8daa9c] <==
	* {"level":"info","ts":"2022-07-28T22:10:26.572Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-07-28T22:10:26.572Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-07-28T22:10:26.572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-07-28T22:10:26.572Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-07-28T22:10:26.573Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:10:26.573Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:10:26.573Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-28T22:10:26.574Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-07-28T22:10:26.574Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-07-28T22:10:26.574Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-28T22:10:26.574Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T22:10:27.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2022-07-28T22:10:27.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-07-28T22:10:27.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-07-28T22:10:27.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-07-28T22:10:27.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-07-28T22:10:27.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-07-28T22:10:27.662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-07-28T22:10:27.665Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-20220728150610-12923 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T22:10:27.665Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:10:27.665Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:10:27.665Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T22:10:27.666Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T22:10:27.666Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-28T22:10:27.667Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:13:37 up 34 min,  0 users,  load average: 0.31, 0.45, 0.43
	Linux multinode-20220728150610-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [06994bc702bb] <==
	* W0728 22:09:48.679598       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:48.681210       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:48.686526       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:48.693980       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:48.723962       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:48.936510       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:48.983882       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:48.988720       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.039408       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.051986       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.065612       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.072086       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.114962       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.207947       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.220920       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.236941       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.239534       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.239796       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.309441       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.320419       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.366670       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.386490       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.515932       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.522602       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:09:49.537243       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [8b0cf038003c] <==
	* I0728 22:10:29.339854       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0728 22:10:29.339883       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I0728 22:10:29.339895       1 controller.go:83] Starting OpenAPI AggregationController
	I0728 22:10:29.339912       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0728 22:10:29.351566       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0728 22:10:29.351983       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0728 22:10:29.378366       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0728 22:10:29.381272       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0728 22:10:29.391959       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0728 22:10:29.432899       1 cache.go:39] Caches are synced for autoregister controller
	I0728 22:10:29.457471       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0728 22:10:29.457570       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0728 22:10:29.457577       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0728 22:10:29.457639       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0728 22:10:29.457932       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0728 22:10:30.129361       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0728 22:10:30.353917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0728 22:10:31.302105       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0728 22:10:31.493664       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0728 22:10:31.505553       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0728 22:10:31.662275       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0728 22:10:31.669630       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0728 22:10:31.798298       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0728 22:10:41.778715       1 controller.go:611] quota admission added evaluator for: endpoints
	I0728 22:10:41.803005       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [19bceddf903b] <==
	* I0728 22:10:41.800336       1 shared_informer.go:262] Caches are synced for deployment
	I0728 22:10:41.801790       1 shared_informer.go:262] Caches are synced for TTL
	I0728 22:10:41.805707       1 shared_informer.go:262] Caches are synced for HPA
	I0728 22:10:41.807185       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0728 22:10:41.808346       1 shared_informer.go:262] Caches are synced for daemon sets
	I0728 22:10:41.808589       1 shared_informer.go:262] Caches are synced for persistent volume
	I0728 22:10:41.810433       1 shared_informer.go:262] Caches are synced for crt configmap
	I0728 22:10:41.818694       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0728 22:10:41.994194       1 shared_informer.go:262] Caches are synced for disruption
	I0728 22:10:41.994255       1 disruption.go:371] Sending events to api server.
	I0728 22:10:42.007112       1 shared_informer.go:262] Caches are synced for resource quota
	I0728 22:10:42.012430       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0728 22:10:42.021606       1 shared_informer.go:262] Caches are synced for resource quota
	I0728 22:10:42.435580       1 shared_informer.go:262] Caches are synced for garbage collector
	I0728 22:10:42.515697       1 shared_informer.go:262] Caches are synced for garbage collector
	I0728 22:10:42.515732       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0728 22:11:13.361400       1 event.go:294] "Event occurred" object="default/busybox-d46db594c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-d46db594c-lb4v6"
	W0728 22:11:16.369252       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728150610-12923-m03 node
	W0728 22:11:16.445195       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220728150610-12923-m02" does not exist
	W0728 22:11:16.445811       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728150610-12923-m02 node
	I0728 22:11:16.449452       1 range_allocator.go:374] Set node multinode-20220728150610-12923-m02 PodCIDR to [10.244.1.0/24]
	W0728 22:11:21.762199       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728150610-12923-m02 node
	I0728 22:11:21.762706       1 event.go:294] "Event occurred" object="multinode-20220728150610-12923-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-20220728150610-12923-m03 status is now: NodeNotReady"
	I0728 22:11:21.766322       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-cn9x2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0728 22:11:21.770157       1 event.go:294] "Event occurred" object="kube-system/kindnet-52mvf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-controller-manager [8e2030fdbc79] <==
	* I0728 22:06:50.393994       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-4hghp"
	I0728 22:07:19.302811       1 node_lifecycle_controller.go:1192] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	W0728 22:07:45.181735       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220728150610-12923-m02" does not exist
	I0728 22:07:45.186703       1 range_allocator.go:374] Set node multinode-20220728150610-12923-m02 PodCIDR to [10.244.1.0/24]
	I0728 22:07:45.188257       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-v5hq8"
	I0728 22:07:45.191677       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bxdk6"
	I0728 22:07:49.288514       1 event.go:294] "Event occurred" object="multinode-20220728150610-12923-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20220728150610-12923-m02 event: Registered Node multinode-20220728150610-12923-m02 in Controller"
	W0728 22:07:49.288676       1 node_lifecycle_controller.go:1014] Missing timestamp for Node multinode-20220728150610-12923-m02. Assuming now as a timestamp.
	W0728 22:08:05.351024       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728150610-12923-m02 node
	I0728 22:08:09.370397       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-d46db594c to 2"
	I0728 22:08:09.374650       1 event.go:294] "Event occurred" object="default/busybox-d46db594c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-d46db594c-vg2w2"
	I0728 22:08:09.513525       1 event.go:294] "Event occurred" object="default/busybox-d46db594c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-d46db594c-jwp7z"
	W0728 22:08:39.808996       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728150610-12923-m02 node
	W0728 22:08:39.809110       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220728150610-12923-m03" does not exist
	I0728 22:08:39.819766       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cn9x2"
	I0728 22:08:39.819911       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-52mvf"
	I0728 22:08:39.820534       1 range_allocator.go:374] Set node multinode-20220728150610-12923-m03 PodCIDR to [10.244.2.0/24]
	W0728 22:08:44.262808       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728150610-12923-m02 node
	W0728 22:08:44.282650       1 node_lifecycle_controller.go:1014] Missing timestamp for Node multinode-20220728150610-12923-m03. Assuming now as a timestamp.
	I0728 22:08:44.282703       1 event.go:294] "Event occurred" object="multinode-20220728150610-12923-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-20220728150610-12923-m03 event: Registered Node multinode-20220728150610-12923-m03 in Controller"
	W0728 22:09:24.749658       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728150610-12923-m02 node
	W0728 22:09:25.591329       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-20220728150610-12923-m03" does not exist
	W0728 22:09:25.591665       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728150610-12923-m02 node
	I0728 22:09:25.595954       1 range_allocator.go:374] Set node multinode-20220728150610-12923-m03 PodCIDR to [10.244.3.0/24]
	W0728 22:09:35.693785       1 topologycache.go:199] Can't get CPU or zone information for multinode-20220728150610-12923-m02 node
	
	* 
	* ==> kube-proxy [0e5cdaa6af5e] <==
	* I0728 22:10:31.767170       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0728 22:10:31.767339       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0728 22:10:31.767372       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 22:10:31.795536       1 server_others.go:206] "Using iptables Proxier"
	I0728 22:10:31.795575       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 22:10:31.795582       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 22:10:31.795592       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 22:10:31.795614       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:10:31.795796       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:10:31.795926       1 server.go:661] "Version info" version="v1.24.3"
	I0728 22:10:31.795935       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:10:31.796808       1 config.go:444] "Starting node config controller"
	I0728 22:10:31.796886       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 22:10:31.797083       1 config.go:226] "Starting endpoint slice config controller"
	I0728 22:10:31.797089       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 22:10:31.796720       1 config.go:317] "Starting service config controller"
	I0728 22:10:31.797679       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 22:10:31.896952       1 shared_informer.go:262] Caches are synced for node config
	I0728 22:10:31.897110       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0728 22:10:31.898119       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-proxy [848acc25a7d7] <==
	* I0728 22:06:50.790547       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0728 22:06:50.790606       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0728 22:06:50.790644       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 22:06:50.816091       1 server_others.go:206] "Using iptables Proxier"
	I0728 22:06:50.816130       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 22:06:50.816137       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 22:06:50.816146       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 22:06:50.816188       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:06:50.816351       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:06:50.816474       1 server.go:661] "Version info" version="v1.24.3"
	I0728 22:06:50.816501       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:06:50.816933       1 config.go:317] "Starting service config controller"
	I0728 22:06:50.816966       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 22:06:50.816979       1 config.go:226] "Starting endpoint slice config controller"
	I0728 22:06:50.817224       1 config.go:444] "Starting node config controller"
	I0728 22:06:50.817231       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 22:06:50.817708       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 22:06:50.917373       1 shared_informer.go:262] Caches are synced for node config
	I0728 22:06:50.917811       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0728 22:06:50.917832       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [9db2ba48d7a6] <==
	* E0728 22:06:34.413881       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0728 22:06:34.414002       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0728 22:06:34.414374       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0728 22:06:34.414629       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0728 22:06:34.414659       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0728 22:06:34.414710       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0728 22:06:34.414783       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0728 22:06:34.415025       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0728 22:06:34.415056       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0728 22:06:35.244152       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0728 22:06:35.244187       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0728 22:06:35.254019       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0728 22:06:35.254040       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0728 22:06:35.264742       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0728 22:06:35.264882       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0728 22:06:35.345399       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0728 22:06:35.345476       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0728 22:06:35.361653       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0728 22:06:35.361690       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0728 22:06:35.407232       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0728 22:06:35.407282       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0728 22:06:36.008697       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0728 22:09:39.505515       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0728 22:09:39.505699       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0728 22:09:39.505966       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kube-scheduler [c0df34a7cab2] <==
	* I0728 22:10:27.172222       1 serving.go:348] Generated self-signed cert in-memory
	I0728 22:10:29.388676       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0728 22:10:29.388709       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:10:29.391253       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0728 22:10:29.391299       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0728 22:10:29.391332       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0728 22:10:29.391337       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0728 22:10:29.391348       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0728 22:10:29.391351       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0728 22:10:29.391452       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0728 22:10:29.391983       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0728 22:10:29.492159       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0728 22:10:29.492541       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0728 22:10:29.492562       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 22:10:17 UTC, end at Thu 2022-07-28 22:13:38 UTC. --
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.466815    1153 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.505883    1153 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgf6f\" (UniqueName: \"kubernetes.io/projected/f9727653-ed51-43f3-95ad-fd2f5fb0ac6e-kube-api-access-bgf6f\") pod \"kube-proxy-cdz7z\" (UID: \"f9727653-ed51-43f3-95ad-fd2f5fb0ac6e\") " pod="kube-system/kube-proxy-cdz7z"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.505938    1153 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b535556f-fbbe-4220-9037-5016f1b8fb51-xtables-lock\") pod \"kindnet-tlp2m\" (UID: \"b535556f-fbbe-4220-9037-5016f1b8fb51\") " pod="kube-system/kindnet-tlp2m"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.505955    1153 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b535556f-fbbe-4220-9037-5016f1b8fb51-lib-modules\") pod \"kindnet-tlp2m\" (UID: \"b535556f-fbbe-4220-9037-5016f1b8fb51\") " pod="kube-system/kindnet-tlp2m"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.505973    1153 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9727653-ed51-43f3-95ad-fd2f5fb0ac6e-xtables-lock\") pod \"kube-proxy-cdz7z\" (UID: \"f9727653-ed51-43f3-95ad-fd2f5fb0ac6e\") " pod="kube-system/kube-proxy-cdz7z"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.505989    1153 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9727653-ed51-43f3-95ad-fd2f5fb0ac6e-lib-modules\") pod \"kube-proxy-cdz7z\" (UID: \"f9727653-ed51-43f3-95ad-fd2f5fb0ac6e\") " pod="kube-system/kube-proxy-cdz7z"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.506006    1153 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b535556f-fbbe-4220-9037-5016f1b8fb51-cni-cfg\") pod \"kindnet-tlp2m\" (UID: \"b535556f-fbbe-4220-9037-5016f1b8fb51\") " pod="kube-system/kindnet-tlp2m"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.506028    1153 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8q7k\" (UniqueName: \"kubernetes.io/projected/ea8a6018-c281-45ec-bbb7-19f2988aa884-kube-api-access-l8q7k\") pod \"coredns-6d4b75cb6d-dfxk7\" (UID: \"ea8a6018-c281-45ec-bbb7-19f2988aa884\") " pod="kube-system/coredns-6d4b75cb6d-dfxk7"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.506043    1153 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9727653-ed51-43f3-95ad-fd2f5fb0ac6e-kube-proxy\") pod \"kube-proxy-cdz7z\" (UID: \"f9727653-ed51-43f3-95ad-fd2f5fb0ac6e\") " pod="kube-system/kube-proxy-cdz7z"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.506068    1153 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/29238934-2c0b-4262-80ff-12975d44a715-tmp\") pod \"storage-provisioner\" (UID: \"29238934-2c0b-4262-80ff-12975d44a715\") " pod="kube-system/storage-provisioner"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.506082    1153 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jtmd\" (UniqueName: \"kubernetes.io/projected/b535556f-fbbe-4220-9037-5016f1b8fb51-kube-api-access-5jtmd\") pod \"kindnet-tlp2m\" (UID: \"b535556f-fbbe-4220-9037-5016f1b8fb51\") " pod="kube-system/kindnet-tlp2m"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.506100    1153 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcp7w\" (UniqueName: \"kubernetes.io/projected/29238934-2c0b-4262-80ff-12975d44a715-kube-api-access-pcp7w\") pod \"storage-provisioner\" (UID: \"29238934-2c0b-4262-80ff-12975d44a715\") " pod="kube-system/storage-provisioner"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.506165    1153 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lbt8\" (UniqueName: \"kubernetes.io/projected/d9fb3157-8740-4451-a744-642ae7c70cd7-kube-api-access-5lbt8\") pod \"busybox-d46db594c-jwp7z\" (UID: \"d9fb3157-8740-4451-a744-642ae7c70cd7\") " pod="default/busybox-d46db594c-jwp7z"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.506191    1153 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea8a6018-c281-45ec-bbb7-19f2988aa884-config-volume\") pod \"coredns-6d4b75cb6d-dfxk7\" (UID: \"ea8a6018-c281-45ec-bbb7-19f2988aa884\") " pod="kube-system/coredns-6d4b75cb6d-dfxk7"
	Jul 28 22:10:29 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:29.506203    1153 reconciler.go:157] "Reconciler: start to sync state"
	Jul 28 22:10:30 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:30.063672    1153 kubelet_node_status.go:108] "Node was previously registered" node="multinode-20220728150610-12923"
	Jul 28 22:10:30 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:30.063807    1153 kubelet_node_status.go:73] "Successfully registered node" node="multinode-20220728150610-12923"
	Jul 28 22:10:30 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:30.659239    1153 request.go:601] Waited for 1.050375929s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/default/serviceaccounts/default/token
	Jul 28 22:10:31 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:31.923772    1153 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="39c7425173aff2d27eaf6a7508b2e703782b9bd3452d9dbff575f03e8e0d8ce4"
	Jul 28 22:10:32 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:32.943745    1153 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
	Jul 28 22:10:36 multinode-20220728150610-12923 kubelet[1153]: I0728 22:10:36.356777    1153 prober_manager.go:274] "Failed to trigger a manual run" probe="Readiness"
	Jul 28 22:11:02 multinode-20220728150610-12923 kubelet[1153]: I0728 22:11:02.140424    1153 scope.go:110] "RemoveContainer" containerID="765d6b79e654dd6e4bb8de6dea8b70c0adf3dcf9fd7de35b532f87cc396e6625"
	Jul 28 22:11:02 multinode-20220728150610-12923 kubelet[1153]: I0728 22:11:02.140634    1153 scope.go:110] "RemoveContainer" containerID="3d78554b9e6703f2b58b68a7c598c8b22f762926942e815ddb25439051836bd6"
	Jul 28 22:11:02 multinode-20220728150610-12923 kubelet[1153]: E0728 22:11:02.140777    1153 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(29238934-2c0b-4262-80ff-12975d44a715)\"" pod="kube-system/storage-provisioner" podUID=29238934-2c0b-4262-80ff-12975d44a715
	Jul 28 22:11:15 multinode-20220728150610-12923 kubelet[1153]: I0728 22:11:15.551132    1153 scope.go:110] "RemoveContainer" containerID="3d78554b9e6703f2b58b68a7c598c8b22f762926942e815ddb25439051836bd6"
	
	* 
	* ==> storage-provisioner [3d78554b9e67] <==
	* I0728 22:10:31.877671       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0728 22:11:01.860206       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [dc01b2e9e94d] <==
	* I0728 22:11:15.631257       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 22:11:15.639835       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 22:11:15.639880       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0728 22:11:33.034436       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0728 22:11:33.034520       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"72e54483-90e3-4dd8-af57-8256d0562c5a", APIVersion:"v1", ResourceVersion:"889", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-20220728150610-12923_c313d777-b649-49b4-8afe-f964f7a0d72d became leader
	I0728 22:11:33.034584       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-20220728150610-12923_c313d777-b649-49b4-8afe-f964f7a0d72d!
	I0728 22:11:33.135581       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-20220728150610-12923_c313d777-b649-49b4-8afe-f964f7a0d72d!
	

-- /stdout --
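The dump above ends with two connectivity failures worth flagging: the restarted kube-apiserver loops on "connection refused" while dialing etcd at 127.0.0.1:2379, and the first storage-provisioner instance dies on an i/o timeout reaching the apiserver service at 10.96.0.1:443. Below is a minimal Go sketch for checking both endpoints by hand; the addresses are taken from the logs, and it assumes a vantage point (e.g. a shell inside the minikube node) that can actually reach them.

```go
// reachability_sketch.go -- an editorial sketch, not part of the test suite.
package main

import (
	"fmt"
	"net"
	"time"
)

// probe reports whether a TCP endpoint accepts connections within a short window.
func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Printf("%s: unreachable (%v)\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s: accepting connections\n", addr)
}

func main() {
	probe("127.0.0.1:2379") // etcd client port the apiserver kept redialing
	probe("10.96.0.1:443")  // kubernetes service IP the storage-provisioner timed out on
}
```

Both messages are consistent with a control plane that was still coming back up when the provisioner's 32-second request window (the "?timeout=32s" in the logged URL) expired.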
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-20220728150610-12923 -n multinode-20220728150610-12923
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-20220728150610-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context multinode-20220728150610-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.465375145s)
helpers_test.go:270: non-running pods: busybox-d46db594c-lb4v6
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-20220728150610-12923 describe pod busybox-d46db594c-lb4v6
helpers_test.go:280: (dbg) kubectl --context multinode-20220728150610-12923 describe pod busybox-d46db594c-lb4v6:

-- stdout --
	Name:           busybox-d46db594c-lb4v6
	Namespace:      default
	Priority:       0
	Node:           multinode-20220728150610-12923-m03/
	Labels:         app=busybox
	                pod-template-hash=d46db594c
	Annotations:    <none>
	Status:         Pending
	IP:             
	IPs:            <none>
	Controlled By:  ReplicaSet/busybox-d46db594c
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j8pxg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   True 
	Volumes:
	  kube-api-access-j8pxg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m28s  default-scheduler  Successfully assigned default/busybox-d46db594c-lb4v6 to multinode-20220728150610-12923-m03

-- /stdout --
helpers_test.go:283: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (242.65s)
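The post-mortem above follows a two-step pattern: helpers_test.go first lists non-running pods with a kubectl field selector (helpers_test.go:261), then describes each one (helpers_test.go:275). A standalone Go approximation of that flow is sketched below; the context name is copied from this run, and the structure is an illustration, not minikube's actual helper code.

```go
// postmortem_sketch.go -- a hedged, standalone approximation of the
// post-mortem steps shown in the harness output above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "multinode-20220728150610-12923" // kubectl context name from this run

	// Step 1: list non-running pods across all namespaces.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		fmt.Println("kubectl get failed:", err)
		return
	}

	// Step 2: describe each non-running pod. The namespace is omitted here,
	// as the harness does for the default-namespace busybox pod above.
	for _, pod := range strings.Fields(string(out)) {
		desc, _ := exec.Command("kubectl", "--context", ctx,
			"describe", "pod", pod).CombinedOutput()
		fmt.Printf("=== %s ===\n%s\n", pod, desc)
	}
}
```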

TestPreload (264.32s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220728151611-12923 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0728 15:17:13.981333   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 15:17:37.933850   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 15:20:17.034334   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
preload_test.go:48: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p test-preload-20220728151611-12923 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: exit status 109 (4m21.289256577s)

-- stdout --
	* [test-preload-20220728151611-12923] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node test-preload-20220728151611-12923 in cluster test-preload-20220728151611-12923
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0728 15:16:11.693197   21847 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:16:11.693416   21847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:16:11.693421   21847 out.go:309] Setting ErrFile to fd 2...
	I0728 15:16:11.693425   21847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:16:11.693561   21847 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:16:11.694076   21847 out.go:303] Setting JSON to false
	I0728 15:16:11.710037   21847 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":7613,"bootTime":1659038958,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:16:11.710133   21847 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:16:11.734074   21847 out.go:177] * [test-preload-20220728151611-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:16:11.777296   21847 notify.go:193] Checking for updates...
	I0728 15:16:11.801787   21847 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:16:11.845016   21847 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:16:11.888796   21847 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:16:11.931735   21847 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:16:11.975893   21847 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:16:11.998394   21847 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:16:12.067756   21847 docker.go:137] docker version: linux-20.10.17
	I0728 15:16:12.067901   21847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:16:12.200965   21847 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-28 22:16:12.142047164 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:16:12.223276   21847 out.go:177] * Using the docker driver based on user configuration
	I0728 15:16:12.245321   21847 start.go:284] selected driver: docker
	I0728 15:16:12.245350   21847 start.go:808] validating driver "docker" against <nil>
	I0728 15:16:12.245416   21847 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:16:12.248872   21847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:16:12.379504   21847 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:46 SystemTime:2022-07-28 22:16:12.323866768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:16:12.379620   21847 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0728 15:16:12.379764   21847 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 15:16:12.406061   21847 out.go:177] * Using Docker Desktop driver with root privileges
	I0728 15:16:12.428833   21847 cni.go:95] Creating CNI manager for ""
	I0728 15:16:12.428864   21847 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:16:12.428880   21847 start_flags.go:310] config:
	{Name:test-preload-20220728151611-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220728151611-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:16:12.451110   21847 out.go:177] * Starting control plane node test-preload-20220728151611-12923 in cluster test-preload-20220728151611-12923
	I0728 15:16:12.494947   21847 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:16:12.517099   21847 out.go:177] * Pulling base image ...
	I0728 15:16:12.559864   21847 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0728 15:16:12.559865   21847 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:16:12.562831   21847 cache.go:107] acquiring lock: {Name:mk354870b349b717e8fe8bff6741e077397d6f7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:16:12.564165   21847 cache.go:107] acquiring lock: {Name:mk85d63c42e955d117778a63235193dba8544eb6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:16:12.564183   21847 cache.go:107] acquiring lock: {Name:mk371ff6c3fd8781a9f966f9fa274719e0b1108c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:16:12.564205   21847 cache.go:107] acquiring lock: {Name:mkc48bea862774e0bdb788ba1a6c00ecbeb768de Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:16:12.564207   21847 cache.go:107] acquiring lock: {Name:mk4cfa58941623462941014d80d5787a0643aa15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:16:12.564215   21847 cache.go:107] acquiring lock: {Name:mk8b4d64a2e5897a321c726a01d1ae87679ada1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:16:12.564175   21847 cache.go:107] acquiring lock: {Name:mk26059bcb696bce608ccdb6e7f65ecc5648af55 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:16:12.564184   21847 cache.go:107] acquiring lock: {Name:mk9ca3c60e9f7652dc38dce9bc894a01d93d89bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:16:12.564329   21847 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0728 15:16:12.564344   21847 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0728 15:16:12.564348   21847 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 181.394µs
	I0728 15:16:12.564371   21847 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0728 15:16:12.564461   21847 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0728 15:16:12.565002   21847 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0728 15:16:12.565011   21847 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0728 15:16:12.565039   21847 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0728 15:16:12.565081   21847 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0728 15:16:12.565105   21847 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/config.json ...
	I0728 15:16:12.565135   21847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/config.json: {Name:mk0c8d131b34b8ec7f7369eb17c831c61ae360c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:16:12.565154   21847 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0728 15:16:12.579852   21847 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0728 15:16:12.579852   21847 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0728 15:16:12.580024   21847 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0728 15:16:12.580226   21847 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0728 15:16:12.581045   21847 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0728 15:16:12.581567   21847 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0728 15:16:12.581631   21847 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0728 15:16:12.627344   21847 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:16:12.627365   21847 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:16:12.627380   21847 cache.go:208] Successfully downloaded all kic artifacts
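
The image.go lines above show the standard check-before-pull guard: the digest-pinned base image is looked up in the local daemon first, and the pull is skipped on a hit. A minimal sketch of the same guard against the docker CLI; the helper name ensureBaseImage is hypothetical, not minikube's API:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBaseImage pulls ref only if it is not already present in the
// local docker daemon. `docker image inspect` exits non-zero when the
// image is missing, which is the signal to pull.
func ensureBaseImage(ref string) error {
	if err := exec.Command("docker", "image", "inspect", ref).Run(); err == nil {
		fmt.Printf("%s exists in daemon, skipping pull\n", ref)
		return nil
	}
	out, err := exec.Command("docker", "pull", ref).CombinedOutput()
	if err != nil {
		return fmt.Errorf("pull %s: %v: %s", ref, err, out)
	}
	return nil
}

func main() {
	// Digest-only form of the KicBaseImage logged above.
	err := ensureBaseImage("gcr.io/k8s-minikube/kicbase-builds@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842")
	if err != nil {
		fmt.Println(err)
	}
}

Pinning by @sha256 digest is what makes the daemon check trustworthy: a hit can only be the exact bytes the test expects, not a stale tag.
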
	I0728 15:16:12.627451   21847 start.go:370] acquiring machines lock for test-preload-20220728151611-12923: {Name:mk2efb8ddbf144fc7a9fb53776e8045b6ba9a55e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:16:12.627627   21847 start.go:374] acquired machines lock for "test-preload-20220728151611-12923" in 163.318µs
	I0728 15:16:12.627653   21847 start.go:92] Provisioning new machine with config: &{Name:test-preload-20220728151611-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220728151611-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 15:16:12.627754   21847 start.go:132] createHost starting for "" (driver="docker")
	I0728 15:16:12.670258   21847 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0728 15:16:12.670575   21847 start.go:166] libmachine.API.Create for "test-preload-20220728151611-12923" (driver="docker")
	I0728 15:16:12.670611   21847 client.go:168] LocalClient.Create starting
	I0728 15:16:12.670697   21847 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem
	I0728 15:16:12.670733   21847 main.go:134] libmachine: Decoding PEM data...
	I0728 15:16:12.670747   21847 main.go:134] libmachine: Parsing certificate...
	I0728 15:16:12.670818   21847 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem
	I0728 15:16:12.670843   21847 main.go:134] libmachine: Decoding PEM data...
	I0728 15:16:12.670854   21847 main.go:134] libmachine: Parsing certificate...
	I0728 15:16:12.671301   21847 cli_runner.go:164] Run: docker network inspect test-preload-20220728151611-12923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0728 15:16:12.732753   21847 cli_runner.go:211] docker network inspect test-preload-20220728151611-12923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0728 15:16:12.732860   21847 network_create.go:272] running [docker network inspect test-preload-20220728151611-12923] to gather additional debugging logs...
	I0728 15:16:12.732881   21847 cli_runner.go:164] Run: docker network inspect test-preload-20220728151611-12923
	W0728 15:16:12.793815   21847 cli_runner.go:211] docker network inspect test-preload-20220728151611-12923 returned with exit code 1
	I0728 15:16:12.793839   21847 network_create.go:275] error running [docker network inspect test-preload-20220728151611-12923]: docker network inspect test-preload-20220728151611-12923: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: test-preload-20220728151611-12923
	I0728 15:16:12.793849   21847 network_create.go:277] output of [docker network inspect test-preload-20220728151611-12923]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: test-preload-20220728151611-12923
	
	** /stderr **
	I0728 15:16:12.793933   21847 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0728 15:16:12.856397   21847 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000750128] misses:0}
	I0728 15:16:12.856492   21847 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:16:12.856548   21847 network_create.go:115] attempt to create docker network test-preload-20220728151611-12923 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0728 15:16:12.856654   21847 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220728151611-12923 test-preload-20220728151611-12923
	W0728 15:16:12.918030   21847 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220728151611-12923 test-preload-20220728151611-12923 returned with exit code 1
	W0728 15:16:12.918064   21847 network_create.go:107] failed to create docker network test-preload-20220728151611-12923 192.168.49.0/24, will retry: subnet is taken
	I0728 15:16:12.918298   21847 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000750128] amended:false}} dirty:map[] misses:0}
	I0728 15:16:12.918311   21847 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:16:12.918510   21847 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000750128] amended:true}} dirty:map[192.168.49.0:0xc000750128 192.168.58.0:0xc000c54020] misses:0}
	I0728 15:16:12.918522   21847 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:16:12.918528   21847 network_create.go:115] attempt to create docker network test-preload-20220728151611-12923 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0728 15:16:12.918582   21847 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220728151611-12923 test-preload-20220728151611-12923
	W0728 15:16:12.979159   21847 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220728151611-12923 test-preload-20220728151611-12923 returned with exit code 1
	W0728 15:16:12.979185   21847 network_create.go:107] failed to create docker network test-preload-20220728151611-12923 192.168.58.0/24, will retry: subnet is taken
	I0728 15:16:12.979435   21847 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000750128] amended:true}} dirty:map[192.168.49.0:0xc000750128 192.168.58.0:0xc000c54020] misses:1}
	I0728 15:16:12.979452   21847 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:16:12.979653   21847 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000750128] amended:true}} dirty:map[192.168.49.0:0xc000750128 192.168.58.0:0xc000c54020 192.168.67.0:0xc000c54058] misses:1}
	I0728 15:16:12.979666   21847 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:16:12.979676   21847 network_create.go:115] attempt to create docker network test-preload-20220728151611-12923 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0728 15:16:12.979727   21847 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=test-preload-20220728151611-12923 test-preload-20220728151611-12923
	I0728 15:16:13.072266   21847 network_create.go:99] docker network test-preload-20220728151611-12923 192.168.67.0/24 created
	I0728 15:16:13.072300   21847 kic.go:106] calculated static IP "192.168.67.2" for the "test-preload-20220728151611-12923" container
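
The three create attempts above make the retry policy visible: 192.168.49.0/24 and 192.168.58.0/24 were already taken, 192.168.67.0/24 was free, and the first container then gets the subnet's second host address (192.168.67.2). A rough sketch of that loop, assuming the stride of 9 between candidate /24s that the log suggests (49, 58, 67) and treating any non-zero exit from `docker network create` as "subnet is taken":

package main

import (
	"fmt"
	"net"
	"os/exec"
)

// createNetwork tries candidate /24 subnets (192.168.49.0, 192.168.58.0,
// 192.168.67.0, ...) until `docker network create` succeeds, mirroring
// the "subnet is taken, will retry" lines above.
func createNetwork(name string) (*net.IPNet, error) {
	for third := 49; third <= 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500", name).Run()
		if err == nil {
			_, ipnet, _ := net.ParseCIDR(subnet)
			return ipnet, nil
		}
	}
	return nil, fmt.Errorf("no free private subnet found for %s", name)
}

// staticIP returns the second host address (x.y.z.2) of the subnet,
// which is what the kic.go line assigns to the first container.
func staticIP(ipnet *net.IPNet) net.IP {
	ip := ipnet.IP.To4()
	return net.IPv4(ip[0], ip[1], ip[2], 2)
}

func main() {
	if ipnet, err := createNetwork("demo-network"); err == nil {
		fmt.Println("container IP:", staticIP(ipnet)) // e.g. 192.168.67.2
	}
}

The in-process reservation for 1m0s (the network.go:288 lines) guards the gap between picking a candidate and docker recording it, so two concurrent starts don't race on the same subnet.
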
	I0728 15:16:13.072382   21847 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0728 15:16:13.133942   21847 cli_runner.go:164] Run: docker volume create test-preload-20220728151611-12923 --label name.minikube.sigs.k8s.io=test-preload-20220728151611-12923 --label created_by.minikube.sigs.k8s.io=true
	I0728 15:16:13.195443   21847 oci.go:103] Successfully created a docker volume test-preload-20220728151611-12923
	I0728 15:16:13.195511   21847 cli_runner.go:164] Run: docker run --rm --name test-preload-20220728151611-12923-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220728151611-12923 --entrypoint /usr/bin/test -v test-preload-20220728151611-12923:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0728 15:16:13.356260   21847 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0728 15:16:13.379136   21847 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0728 15:16:13.390407   21847 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0728 15:16:13.392651   21847 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0728 15:16:13.400133   21847 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0728 15:16:13.474455   21847 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0728 15:16:13.474585   21847 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0728 15:16:13.474600   21847 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 910.460981ms
	I0728 15:16:13.474609   21847 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0728 15:16:13.478183   21847 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0728 15:16:13.630234   21847 oci.go:107] Successfully prepared a docker volume test-preload-20220728151611-12923
	I0728 15:16:13.630263   21847 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0728 15:16:13.630381   21847 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0728 15:16:13.763571   21847 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname test-preload-20220728151611-12923 --name test-preload-20220728151611-12923 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=test-preload-20220728151611-12923 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=test-preload-20220728151611-12923 --network test-preload-20220728151611-12923 --ip 192.168.67.2 --volume test-preload-20220728151611-12923:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0728 15:16:14.146336   21847 cli_runner.go:164] Run: docker container inspect test-preload-20220728151611-12923 --format={{.State.Running}}
	I0728 15:16:14.214035   21847 cli_runner.go:164] Run: docker container inspect test-preload-20220728151611-12923 --format={{.State.Status}}
	I0728 15:16:14.284936   21847 cli_runner.go:164] Run: docker exec test-preload-20220728151611-12923 stat /var/lib/dpkg/alternatives/iptables
	I0728 15:16:14.367571   21847 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 exists
	I0728 15:16:14.367593   21847 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5" took 1.803459696s
	I0728 15:16:14.367627   21847 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 succeeded
	I0728 15:16:14.399720   21847 oci.go:144] the created container "test-preload-20220728151611-12923" has a running status.
	I0728 15:16:14.399757   21847 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/test-preload-20220728151611-12923/id_rsa...
	I0728 15:16:14.507452   21847 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/test-preload-20220728151611-12923/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0728 15:16:14.624089   21847 cli_runner.go:164] Run: docker container inspect test-preload-20220728151611-12923 --format={{.State.Status}}
	I0728 15:16:14.688362   21847 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0728 15:16:14.688376   21847 kic_runner.go:114] Args: [docker exec --privileged test-preload-20220728151611-12923 chown docker:docker /home/docker/.ssh/authorized_keys]
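
The kic_runner lines above finish SSH provisioning: the freshly generated public key is placed at /home/docker/.ssh/authorized_keys inside the container, then a privileged exec fixes its ownership. The same two steps, sketched with the docker CLI (the host-side key path is illustrative):

package main

import (
	"log"
	"os/exec"
)

func main() {
	const container = "test-preload-20220728151611-12923"
	// Step 1: copy the host-side public key into the container (the log
	// does the equivalent transfer via a temp file).
	cp := exec.Command("docker", "cp",
		"/tmp/id_rsa.pub", // illustrative path
		container+":/home/docker/.ssh/authorized_keys")
	if out, err := cp.CombinedOutput(); err != nil {
		log.Fatalf("copy key: %v: %s", err, out)
	}
	// Step 2: the same privileged exec the log shows, so sshd will
	// accept the key's ownership.
	chown := exec.Command("docker", "exec", "--privileged", container,
		"chown", "docker:docker", "/home/docker/.ssh/authorized_keys")
	if out, err := chown.CombinedOutput(); err != nil {
		log.Fatalf("chown key: %v: %s", err, out)
	}
}
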
	I0728 15:16:14.801966   21847 cli_runner.go:164] Run: docker container inspect test-preload-20220728151611-12923 --format={{.State.Status}}
	I0728 15:16:14.865858   21847 machine.go:88] provisioning docker machine ...
	I0728 15:16:14.865901   21847 ubuntu.go:169] provisioning hostname "test-preload-20220728151611-12923"
	I0728 15:16:14.866013   21847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220728151611-12923
	I0728 15:16:14.930758   21847 main.go:134] libmachine: Using SSH client type: native
	I0728 15:16:14.930979   21847 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56810 <nil> <nil>}
	I0728 15:16:14.930996   21847 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-20220728151611-12923 && echo "test-preload-20220728151611-12923" | sudo tee /etc/hostname
	I0728 15:16:15.055599   21847 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-20220728151611-12923
	
	I0728 15:16:15.055695   21847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220728151611-12923
	I0728 15:16:15.121480   21847 main.go:134] libmachine: Using SSH client type: native
	I0728 15:16:15.121662   21847 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56810 <nil> <nil>}
	I0728 15:16:15.121679   21847 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-20220728151611-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-20220728151611-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-20220728151611-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:16:15.243878   21847 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:16:15.243898   21847 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:16:15.243922   21847 ubuntu.go:177] setting up certificates
	I0728 15:16:15.243933   21847 provision.go:83] configureAuth start
	I0728 15:16:15.244004   21847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220728151611-12923
	I0728 15:16:15.309186   21847 provision.go:138] copyHostCerts
	I0728 15:16:15.309268   21847 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:16:15.309280   21847 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:16:15.309371   21847 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:16:15.309555   21847 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:16:15.309576   21847 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:16:15.309635   21847 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:16:15.309779   21847 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:16:15.309785   21847 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:16:15.309838   21847 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:16:15.309949   21847 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.test-preload-20220728151611-12923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube test-preload-20220728151611-12923]
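
configureAuth then mints a server certificate whose SANs cover the container IP, loopback, and the hostnames in the san=[...] list above. A compact approximation with crypto/x509; note that minikube signs with its own CA (ca.pem/ca-key.pem above), whereas this sketch self-signs purely to stay short:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log.
		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "test-preload-20220728151611-12923"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
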
	I0728 15:16:15.347894   21847 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 exists
	I0728 15:16:15.347917   21847 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0" took 2.783741031s
	I0728 15:16:15.347934   21847 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 succeeded
	I0728 15:16:15.516965   21847 provision.go:172] copyRemoteCerts
	I0728 15:16:15.517022   21847 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:16:15.517066   21847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220728151611-12923
	I0728 15:16:15.580131   21847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56810 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/test-preload-20220728151611-12923/id_rsa Username:docker}
	I0728 15:16:15.665542   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:16:15.682637   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0728 15:16:15.689691   21847 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 exists
	I0728 15:16:15.689707   21847 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0" took 3.125624678s
	I0728 15:16:15.689720   21847 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 succeeded
	I0728 15:16:15.699723   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 15:16:15.712775   21847 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 exists
	I0728 15:16:15.712794   21847 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0" took 3.150057407s
	I0728 15:16:15.712819   21847 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 succeeded
	I0728 15:16:15.717066   21847 provision.go:86] duration metric: configureAuth took 473.125532ms
	I0728 15:16:15.717075   21847 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:16:15.717207   21847 config.go:178] Loaded profile config "test-preload-20220728151611-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0728 15:16:15.717275   21847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220728151611-12923
	I0728 15:16:15.781129   21847 main.go:134] libmachine: Using SSH client type: native
	I0728 15:16:15.781271   21847 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56810 <nil> <nil>}
	I0728 15:16:15.781281   21847 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:16:15.899339   21847 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:16:15.899352   21847 ubuntu.go:71] root file system type: overlay
	I0728 15:16:15.899508   21847 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:16:15.899583   21847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220728151611-12923
	I0728 15:16:15.963147   21847 main.go:134] libmachine: Using SSH client type: native
	I0728 15:16:15.963293   21847 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56810 <nil> <nil>}
	I0728 15:16:15.963350   21847 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:16:16.088479   21847 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:16:16.088570   21847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220728151611-12923
	I0728 15:16:16.151623   21847 main.go:134] libmachine: Using SSH client type: native
	I0728 15:16:16.151782   21847 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 56810 <nil> <nil>}
	I0728 15:16:16.151795   21847 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:16:16.220112   21847 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 exists
	I0728 15:16:16.220130   21847 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0" took 3.65605137s
	I0728 15:16:16.220138   21847 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0728 15:16:16.497392   21847 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 exists
	I0728 15:16:16.497412   21847 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0" took 3.933589246s
	I0728 15:16:16.497421   21847 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 succeeded
	I0728 15:16:16.497435   21847 cache.go:87] Successfully saved all images to host disk.
	I0728 15:16:16.757457   21847 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-28 22:16:16.106202218 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0728 15:16:16.757481   21847 machine.go:91] provisioned docker machine in 1.891622772s
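
The `sudo diff -u ... || { ... }` command issued at 15:16:16.151 is an idempotency guard: diff exits 0 when the rendered unit already matches the installed one, so the mv/daemon-reload/enable/restart branch runs only on change (here the diff output above shows real changes, hence the restart and the SysV sync messages). The same guard, sketched locally with os/exec:

package main

import (
	"fmt"
	"os/exec"
)

// syncUnit installs newPath over unitPath only when the two differ,
// then reloads systemd and restarts the service -- the same
// `diff -u a b || { mv ...; daemon-reload; restart; }` shape as the log.
func syncUnit(unitPath, newPath, service string) error {
	// diff -u exits 0 when the files match; nothing to do then.
	if exec.Command("diff", "-u", unitPath, newPath).Run() == nil {
		return nil
	}
	steps := [][]string{
		{"mv", newPath, unitPath},
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", service},
		{"systemctl", "restart", service},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	if err := syncUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker"); err != nil {
		fmt.Println(err)
	}
}
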
	I0728 15:16:16.757488   21847 client.go:171] LocalClient.Create took 4.086915797s
	I0728 15:16:16.757507   21847 start.go:174] duration metric: libmachine.API.Create for "test-preload-20220728151611-12923" took 4.086973976s
	I0728 15:16:16.757518   21847 start.go:307] post-start starting for "test-preload-20220728151611-12923" (driver="docker")
	I0728 15:16:16.757523   21847 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:16:16.757585   21847 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:16:16.757631   21847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220728151611-12923
	I0728 15:16:16.821590   21847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56810 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/test-preload-20220728151611-12923/id_rsa Username:docker}
	I0728 15:16:16.910253   21847 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:16:16.913651   21847 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:16:16.913668   21847 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:16:16.913681   21847 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:16:16.913687   21847 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:16:16.913697   21847 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:16:16.913803   21847 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:16:16.913949   21847 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:16:16.914112   21847 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:16:16.921032   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:16:16.937555   21847 start.go:310] post-start completed in 180.027053ms
	I0728 15:16:16.938082   21847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220728151611-12923
	I0728 15:16:17.001795   21847 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/config.json ...
	I0728 15:16:17.002231   21847 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:16:17.002278   21847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220728151611-12923
	I0728 15:16:17.065174   21847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56810 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/test-preload-20220728151611-12923/id_rsa Username:docker}
	I0728 15:16:17.150762   21847 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:16:17.154923   21847 start.go:135] duration metric: createHost completed in 4.527208472s
	I0728 15:16:17.154939   21847 start.go:82] releasing machines lock for "test-preload-20220728151611-12923", held for 4.527351834s
	I0728 15:16:17.155007   21847 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" test-preload-20220728151611-12923
	I0728 15:16:17.217736   21847 ssh_runner.go:195] Run: systemctl --version
	I0728 15:16:17.217765   21847 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:16:17.217840   21847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220728151611-12923
	I0728 15:16:17.217842   21847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" test-preload-20220728151611-12923
	I0728 15:16:17.283781   21847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56810 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/test-preload-20220728151611-12923/id_rsa Username:docker}
	I0728 15:16:17.283771   21847 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56810 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/test-preload-20220728151611-12923/id_rsa Username:docker}
	I0728 15:16:17.563485   21847 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:16:17.573942   21847 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:16:17.573995   21847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:16:17.582840   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:16:17.595128   21847 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:16:17.656789   21847 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:16:17.718586   21847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:16:17.780248   21847 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:16:17.978423   21847 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:16:18.013975   21847 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:16:18.070975   21847 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 20.10.17 ...
	I0728 15:16:18.071089   21847 cli_runner.go:164] Run: docker exec -t test-preload-20220728151611-12923 dig +short host.docker.internal
	I0728 15:16:18.200438   21847 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:16:18.200535   21847 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:16:18.204888   21847 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
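
The /etc/hosts edit above uses a filter-then-replace pattern: drop any existing host.minikube.internal mapping, append the freshly dug host IP, and copy the temp file over /etc/hosts in one step. The same rewrite in Go looks roughly like this (writing the file directly, where the log goes through /tmp/h.$$ plus sudo cp):

package main

import (
	"os"
	"strings"
)

// setHostsEntry rewrites path so exactly one line maps ip to name,
// mirroring the grep -v / echo / cp pipeline in the log. Blank lines
// are dropped as a side effect, which the original pipeline keeps.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = setHostsEntry("/etc/hosts", "192.168.65.2", "host.minikube.internal")
}
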
	I0728 15:16:18.214290   21847 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" test-preload-20220728151611-12923
	I0728 15:16:18.278116   21847 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0728 15:16:18.278176   21847 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:16:18.306695   21847 docker.go:611] Got preloaded images: 
	I0728 15:16:18.306707   21847 docker.go:617] k8s.gcr.io/kube-apiserver:v1.17.0 wasn't preloaded
	I0728 15:16:18.306714   21847 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.17.0 k8s.gcr.io/kube-controller-manager:v1.17.0 k8s.gcr.io/kube-scheduler:v1.17.0 k8s.gcr.io/kube-proxy:v1.17.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
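
`docker images --format {{.Repository}}:{{.Tag}}` came back empty, so the preload is declared missing and LoadImages falls back to shipping each cached image individually. The detection itself is just a set difference; roughly:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// missingImages lists which of the required refs are absent from the
// node's runtime, per `docker images --format {{.Repository}}:{{.Tag}}`.
func missingImages(required []string) ([]string, error) {
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, ref := range strings.Fields(string(out)) {
		have[ref] = true
	}
	var missing []string
	for _, ref := range required {
		if !have[ref] {
			missing = append(missing, ref)
		}
	}
	return missing, nil
}

func main() {
	missing, _ := missingImages([]string{
		"k8s.gcr.io/kube-apiserver:v1.17.0",
		"k8s.gcr.io/pause:3.1",
	})
	fmt.Println("need transfer:", missing)
}
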
	I0728 15:16:18.314283   21847 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0728 15:16:18.315221   21847 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0728 15:16:18.315621   21847 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:16:18.316382   21847 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0728 15:16:18.317747   21847 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.5
	I0728 15:16:18.318023   21847 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
	I0728 15:16:18.318107   21847 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0728 15:16:18.318387   21847 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0728 15:16:18.323669   21847 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error: No such image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0728 15:16:18.323799   21847 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error: No such image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0728 15:16:18.324467   21847 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:16:18.325634   21847 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error: No such image: k8s.gcr.io/kube-proxy:v1.17.0
	I0728 15:16:18.326084   21847 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error: No such image: k8s.gcr.io/etcd:3.4.3-0
	I0728 15:16:18.326339   21847 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error: No such image: k8s.gcr.io/coredns:1.6.5
	I0728 15:16:18.326468   21847 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error: No such image: k8s.gcr.io/pause:3.1
	I0728 15:16:18.327044   21847 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error: No such image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0728 15:16:18.888642   21847 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.17.0
	I0728 15:16:18.919356   21847 cache_images.go:116] "k8s.gcr.io/kube-apiserver:v1.17.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I0728 15:16:18.919398   21847 docker.go:292] Removing image: k8s.gcr.io/kube-apiserver:v1.17.0
	I0728 15:16:18.919452   21847 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-apiserver:v1.17.0
	I0728 15:16:18.938403   21847 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.17.0
	I0728 15:16:18.949103   21847 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0
	I0728 15:16:18.949229   21847 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0728 15:16:18.971985   21847 cache_images.go:116] "k8s.gcr.io/kube-controller-manager:v1.17.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I0728 15:16:18.972000   21847 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.17.0': No such file or directory
	I0728 15:16:18.972009   21847 docker.go:292] Removing image: k8s.gcr.io/kube-controller-manager:v1.17.0
	I0728 15:16:18.972029   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
	I0728 15:16:18.972052   21847 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-controller-manager:v1.17.0
	I0728 15:16:19.003094   21847 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.17.0
	I0728 15:16:19.011749   21847 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0
	I0728 15:16:19.011872   21847 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0728 15:16:19.040706   21847 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.17.0': No such file or directory
	I0728 15:16:19.040743   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
	I0728 15:16:19.072375   21847 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.5
	I0728 15:16:19.074694   21847 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0728 15:16:19.078520   21847 cache_images.go:116] "k8s.gcr.io/kube-proxy:v1.17.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I0728 15:16:19.078548   21847 docker.go:292] Removing image: k8s.gcr.io/kube-proxy:v1.17.0
	I0728 15:16:19.078604   21847 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-proxy:v1.17.0
	I0728 15:16:19.151648   21847 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.17.0
	I0728 15:16:19.155359   21847 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:16:19.158559   21847 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.1
	I0728 15:16:19.165135   21847 cache_images.go:116] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0728 15:16:19.165179   21847 cache_images.go:116] "k8s.gcr.io/coredns:1.6.5" needs transfer: "k8s.gcr.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I0728 15:16:19.165180   21847 docker.go:292] Removing image: k8s.gcr.io/etcd:3.4.3-0
	I0728 15:16:19.165200   21847 docker.go:292] Removing image: k8s.gcr.io/coredns:1.6.5
	I0728 15:16:19.165256   21847 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/coredns:1.6.5
	I0728 15:16:19.165261   21847 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/etcd:3.4.3-0
	I0728 15:16:19.183018   21847 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0
	I0728 15:16:19.183170   21847 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I0728 15:16:19.261222   21847 cache_images.go:116] "k8s.gcr.io/kube-scheduler:v1.17.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I0728 15:16:19.261250   21847 docker.go:292] Removing image: k8s.gcr.io/kube-scheduler:v1.17.0
	I0728 15:16:19.261311   21847 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/kube-scheduler:v1.17.0
	I0728 15:16:19.264593   21847 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0728 15:16:19.264634   21847 docker.go:292] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:16:19.264696   21847 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:16:19.310852   21847 cache_images.go:116] "k8s.gcr.io/pause:3.1" needs transfer: "k8s.gcr.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0728 15:16:19.310879   21847 docker.go:292] Removing image: k8s.gcr.io/pause:3.1
	I0728 15:16:19.310949   21847 ssh_runner.go:195] Run: docker rmi k8s.gcr.io/pause:3.1
	I0728 15:16:19.311900   21847 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0
	I0728 15:16:19.312015   21847 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0728 15:16:19.315358   21847 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.17.0': No such file or directory
	I0728 15:16:19.315415   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
	I0728 15:16:19.318520   21847 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5
	I0728 15:16:19.318624   21847 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I0728 15:16:19.383837   21847 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0728 15:16:19.383840   21847 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0
	I0728 15:16:19.383969   21847 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0728 15:16:19.384048   21847 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0728 15:16:19.428950   21847 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0728 15:16:19.429003   21847 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.4.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.3-0': No such file or directory
	I0728 15:16:19.429023   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I0728 15:16:19.429044   21847 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_1.6.5: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.6.5': No such file or directory
	I0728 15:16:19.429065   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
	I0728 15:16:19.429127   21847 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0728 15:16:19.446616   21847 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0728 15:16:19.446621   21847 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.17.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.17.0': No such file or directory
	I0728 15:16:19.446654   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0728 15:16:19.446654   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
	I0728 15:16:19.492109   21847 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.1': No such file or directory
	I0728 15:16:19.492138   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I0728 15:16:19.638749   21847 docker.go:259] Loading image: /var/lib/minikube/images/pause_3.1
	I0728 15:16:19.638764   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	I0728 15:16:19.892348   21847 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 from cache
	I0728 15:16:20.427023   21847 docker.go:259] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0728 15:16:20.427039   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0728 15:16:21.010073   21847 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0728 15:16:21.010112   21847 docker.go:259] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I0728 15:16:21.010132   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
	I0728 15:16:21.836654   21847 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.5 from cache
	I0728 15:16:21.927323   21847 docker.go:259] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I0728 15:16:21.927358   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
	I0728 15:16:24.504100   21847 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load": (2.576751907s)
	I0728 15:16:24.504116   21847 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.17.0 from cache
	I0728 15:16:24.504144   21847 docker.go:259] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I0728 15:16:24.504156   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
	I0728 15:16:24.973869   21847 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.17.0 from cache
	I0728 15:16:24.973891   21847 docker.go:259] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I0728 15:16:24.973905   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
	I0728 15:16:25.987652   21847 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load": (1.013742742s)
	I0728 15:16:25.987665   21847 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.17.0 from cache
	I0728 15:16:25.987694   21847 docker.go:259] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I0728 15:16:25.987705   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
	I0728 15:16:26.921771   21847 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.17.0 from cache
	I0728 15:16:26.921796   21847 docker.go:259] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0728 15:16:26.921817   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
	I0728 15:16:29.768178   21847 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load": (2.846374818s)
	I0728 15:16:29.768193   21847 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.4.3-0 from cache
	I0728 15:16:29.768220   21847 cache_images.go:123] Successfully loaded all cached images
	I0728 15:16:29.768225   21847 cache_images.go:92] LoadImages completed in 11.461624161s
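	The sequence above is minikube's fallback image-load path when no preload tarball is available: for each required image it asks the runtime for the image ID (docker image inspect), removes any mismatched tag (docker rmi), checks whether the tarball already exists in the node (stat), copies the cached tarball in over scp, and finally pipes it into docker load. A rough shell sketch of the same round trip for one image, done by hand (image name and paths copied from the log; minikube cp stands in for the scp step, and add -p <profile> when not on the default profile):
	
	    IMG=k8s.gcr.io/pause:3.1
	    TARBALL=$HOME/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	    # Only transfer if the runtime does not already hold the image.
	    minikube ssh "docker image inspect --format '{{.Id}}' $IMG" || {
	      minikube cp "$TARBALL" /var/lib/minikube/images/pause_3.1
	      minikube ssh "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	    }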
	I0728 15:16:29.768307   21847 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:16:29.847119   21847 cni.go:95] Creating CNI manager for ""
	I0728 15:16:29.847131   21847 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:16:29.847148   21847 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:16:29.847158   21847 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-20220728151611-12923 NodeName:test-preload-20220728151611-12923 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:16:29.847254   21847 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "test-preload-20220728151611-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
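	The kubeadm.yaml rendered above stacks four documents separated by "---" (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); kubeadm reads all of them from the single file passed with --config. A non-destructive way to sanity-check such a file before a real init, assuming kubeadm v1.17's --dry-run flag (path copied from the log):
	
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run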
	
	I0728 15:16:29.847312   21847 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=test-preload-20220728151611-12923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220728151611-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
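	In the kubelet drop-in above, the empty ExecStart= line is deliberate: systemd forbids a second ExecStart for a normal service, so an override unit must first clear the inherited value before assigning the replacement command line. A minimal sketch that installs an equivalent drop-in and reloads systemd (paths from the log; the kubelet flag list is trimmed for brevity):
	
	    sudo install -d /etc/systemd/system/kubelet.service.d
	    printf '[Service]\nExecStart=\nExecStart=%s\n' \
	      '/var/lib/minikube/binaries/v1.17.0/kubelet --config=/var/lib/kubelet/config.yaml' |
	      sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet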
	I0728 15:16:29.847365   21847 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I0728 15:16:29.854959   21847 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.17.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.17.0': No such file or directory
	
	Initiating transfer...
	I0728 15:16:29.855007   21847 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I0728 15:16:29.862351   21847 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/linux/amd64/v1.17.0/kubelet
	I0728 15:16:29.862355   21847 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/linux/amd64/v1.17.0/kubectl
	I0728 15:16:29.862353   21847 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubeadm.sha256 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/linux/amd64/v1.17.0/kubeadm
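	Each download URL above carries a "?checksum=file:<same URL>.sha256" query, which minikube's download layer appears to use to verify the fetched binary against the published SHA-256. The manual equivalent for one binary (URLs copied from the log; this assumes the .sha256 file holds just the hex digest, hence the constructed "digest  filename" line for sha256sum -c):
	
	    curl -fsSLO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet
	    echo "$(curl -fsSL https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum -c -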
	I0728 15:16:31.149733   21847 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I0728 15:16:31.154250   21847 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubeadm': No such file or directory
	I0728 15:16:31.154273   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I0728 15:16:31.604760   21847 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I0728 15:16:31.924608   21847 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I0728 15:16:31.924640   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I0728 15:16:32.227663   21847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:16:32.296643   21847 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I0728 15:16:32.358821   21847 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubelet': No such file or directory
	I0728 15:16:32.358853   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
	I0728 15:16:34.341319   21847 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:16:34.348516   21847 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0728 15:16:34.361742   21847 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:16:34.375219   21847 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I0728 15:16:34.387558   21847 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:16:34.391397   21847 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
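	That one-liner is an idempotent /etc/hosts update: strip any existing control-plane.minikube.internal entry, append the fresh mapping, stage the result under /tmp, then sudo cp it back so the original file's inode and permissions are preserved. The same idiom, generalized (IP and hostname are placeholders):
	
	    IP=192.168.67.2; NAME=control-plane.minikube.internal
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$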
	I0728 15:16:34.401870   21847 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923 for IP: 192.168.67.2
	I0728 15:16:34.401973   21847 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:16:34.402021   21847 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:16:34.402062   21847 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/client.key
	I0728 15:16:34.402074   21847 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/client.crt with IP's: []
	I0728 15:16:34.456383   21847 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/client.crt ...
	I0728 15:16:34.456392   21847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/client.crt: {Name:mk46e716f3df8f13c423067b0ca437c5166a21e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:16:34.456679   21847 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/client.key ...
	I0728 15:16:34.456687   21847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/client.key: {Name:mk8697d7dd4a269c52810d84610e9a6fc2e83efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:16:34.456915   21847 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/apiserver.key.c7fa3a9e
	I0728 15:16:34.456929   21847 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0728 15:16:34.521507   21847 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/apiserver.crt.c7fa3a9e ...
	I0728 15:16:34.521515   21847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/apiserver.crt.c7fa3a9e: {Name:mkde4336eac37ec6633a07c8ce532ec22db7468c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:16:34.521720   21847 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/apiserver.key.c7fa3a9e ...
	I0728 15:16:34.521727   21847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/apiserver.key.c7fa3a9e: {Name:mk2bdbfd4dddc8117f299a9908598c7016902d10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:16:34.521897   21847 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/apiserver.crt
	I0728 15:16:34.522214   21847 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/apiserver.key
	I0728 15:16:34.522385   21847 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/proxy-client.key
	I0728 15:16:34.522400   21847 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/proxy-client.crt with IP's: []
	I0728 15:16:34.638877   21847 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/proxy-client.crt ...
	I0728 15:16:34.638887   21847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/proxy-client.crt: {Name:mke76c772680b4788367cfc70b915b9830fdfa3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:16:34.639106   21847 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/proxy-client.key ...
	I0728 15:16:34.639113   21847 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/proxy-client.key: {Name:mkd9e2f847b9c86e0c362260ab5e8453c673904f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:16:34.639447   21847 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:16:34.639481   21847 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:16:34.639489   21847 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:16:34.639517   21847 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:16:34.639545   21847 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:16:34.639572   21847 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:16:34.639628   21847 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:16:34.640042   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:16:34.657262   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0728 15:16:34.681761   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:16:34.698805   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/test-preload-20220728151611-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 15:16:34.715787   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:16:34.733276   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:16:34.750906   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:16:34.767655   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:16:34.785291   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:16:34.802231   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:16:34.819140   21847 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:16:34.835574   21847 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 15:16:34.847693   21847 ssh_runner.go:195] Run: openssl version
	I0728 15:16:34.853039   21847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:16:34.860727   21847 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:16:34.864756   21847 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:16:34.864800   21847 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:16:34.870173   21847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 15:16:34.877734   21847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:16:34.885775   21847 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:16:34.889954   21847 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:16:34.889993   21847 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:16:34.895087   21847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 15:16:34.902535   21847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:16:34.910408   21847 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:16:34.914439   21847 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:16:34.914481   21847 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:16:34.919839   21847 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
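	The openssl/ln pairs above populate OpenSSL's hashed certificate directory: "openssl x509 -hash -noout" prints the 8-hex-digit subject-name hash, and a symlink named <hash>.0 under /etc/ssl/certs is how TLS clients locate a trusted CA (the .0 suffix leaves room for collisions: .1, .2, ...). Done by hand for one CA, with the path taken from the log:
	
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this CA, per the log
	    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"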
	I0728 15:16:34.927568   21847 kubeadm.go:395] StartCluster: {Name:test-preload-20220728151611-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName:test-preload-20220728151611-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:16:34.927659   21847 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:16:34.956047   21847 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:16:34.963702   21847 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:16:34.970969   21847 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:16:34.971011   21847 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:16:34.977922   21847 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 15:16:34.977946   21847 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:16:35.695277   21847 out.go:204]   - Generating certificates and keys ...
	I0728 15:16:38.580993   21847 out.go:204]   - Booting up control plane ...
	W0728 15:18:33.517303   21847 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [test-preload-20220728151611-12923 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [test-preload-20220728151611-12923 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0728 22:16:35.033482    1576 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W0728 22:16:35.033554    1576 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0728 22:16:38.600776    1576 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0728 22:16:38.601660    1576 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
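	The init attempt dies in wait-control-plane: the kubelet health endpoint on 127.0.0.1:10248 never answers, so the static pods are never created (consistent with the empty docker ps filter results gathered further down). Beyond the commands kubeadm itself suggests, one check worth making in this setup is the cgroup-driver pairing, since the KubeletConfiguration above pins cgroupDriver: systemd and Docker must agree; minikube queries the same value earlier in this log. Two quick checks against this profile:
	
	    minikube ssh -p test-preload-20220728151611-12923 "docker info --format '{{.CgroupDriver}}'"   # expect systemd
	    minikube ssh -p test-preload-20220728151611-12923 "sudo journalctl -u kubelet -n 50 --no-pager"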
	
	I0728 15:18:33.517335   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0728 15:18:33.936832   21847 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:18:33.946008   21847 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:18:33.946056   21847 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:18:33.953281   21847 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 15:18:33.953304   21847 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:18:34.643805   21847 out.go:204]   - Generating certificates and keys ...
	I0728 15:18:35.381384   21847 out.go:204]   - Booting up control plane ...
	I0728 15:20:30.291958   21847 kubeadm.go:397] StartCluster complete in 3m55.366894141s
	I0728 15:20:30.292047   21847 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:20:30.320717   21847 logs.go:274] 0 containers: []
	W0728 15:20:30.320729   21847 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:20:30.320786   21847 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:20:30.351609   21847 logs.go:274] 0 containers: []
	W0728 15:20:30.351621   21847 logs.go:276] No container was found matching "etcd"
	I0728 15:20:30.351678   21847 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:20:30.380676   21847 logs.go:274] 0 containers: []
	W0728 15:20:30.380688   21847 logs.go:276] No container was found matching "coredns"
	I0728 15:20:30.380743   21847 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:20:30.410060   21847 logs.go:274] 0 containers: []
	W0728 15:20:30.410073   21847 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:20:30.410130   21847 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:20:30.438984   21847 logs.go:274] 0 containers: []
	W0728 15:20:30.438996   21847 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:20:30.439058   21847 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:20:30.468170   21847 logs.go:274] 0 containers: []
	W0728 15:20:30.468182   21847 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:20:30.468237   21847 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:20:30.496136   21847 logs.go:274] 0 containers: []
	W0728 15:20:30.496148   21847 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:20:30.496204   21847 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:20:30.525507   21847 logs.go:274] 0 containers: []
	W0728 15:20:30.525520   21847 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:20:30.525529   21847 logs.go:123] Gathering logs for Docker ...
	I0728 15:20:30.525537   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:20:30.541271   21847 logs.go:123] Gathering logs for container status ...
	I0728 15:20:30.541285   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:20:32.595943   21847 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054667133s)
	I0728 15:20:32.596049   21847 logs.go:123] Gathering logs for kubelet ...
	I0728 15:20:32.596056   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:20:32.635719   21847 logs.go:123] Gathering logs for dmesg ...
	I0728 15:20:32.635734   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:20:32.648208   21847 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:20:32.648221   21847 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:20:32.699332   21847 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0728 15:20:32.699350   21847 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0728 22:18:34.007528    3853 validation.go:28] Cannot validate kubelet config - no validator is available
	W0728 22:18:34.007581    3853 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0728 22:18:35.371186    3853 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0728 22:18:35.371856    3853 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0728 15:20:32.699364   21847 out.go:239] * 
	W0728 15:20:32.699478   21847 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0728 22:18:34.007528    3853 validation.go:28] Cannot validate kubelet config - no validator is available
	W0728 22:18:34.007581    3853 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0728 22:18:35.371186    3853 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0728 22:18:35.371856    3853 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 15:20:32.699494   21847 out.go:239] * 
	W0728 15:20:32.700026   21847 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 15:20:32.762562   21847 out.go:177] 
	W0728 15:20:32.804641   21847 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W0728 22:18:34.007528    3853 validation.go:28] Cannot validate kubelet config - no validator is available
	W0728 22:18:34.007581    3853 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0728 22:18:35.371186    3853 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0728 22:18:35.371856    3853 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 15:20:32.804742   21847 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0728 15:20:32.804793   21847 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0728 15:20:32.846601   21847 out.go:177] 

                                                
                                                
** /stderr **
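Note: the kubeadm advice repeated in the log above targets the minikube node, which under the docker driver is itself a container named after the profile. A minimal sketch of those same checks run from the host (hedged; it assumes the node container from this run were still up):

	# Hedged sketch — container name taken from this run's profile.
	docker exec -it test-preload-20220728151611-12923 systemctl status kubelet
	docker exec -it test-preload-20220728151611-12923 journalctl -u kubelet --no-pager -n 100
	# The same container listing kubeadm suggests, run against the node's own dockerd:
	docker exec -it test-preload-20220728151611-12923 docker ps -a | grep kube | grep -v pause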
preload_test.go:50: out/minikube-darwin-amd64 start -p test-preload-20220728151611-12923 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0 failed: exit status 109
panic.go:482: *** TestPreload FAILED at 2022-07-28 15:20:32.946295 -0700 PDT m=+2547.168511174
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPreload]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect test-preload-20220728151611-12923
helpers_test.go:235: (dbg) docker inspect test-preload-20220728151611-12923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b81c58afafcfd22acdd0aa7e5297727c37af934ba09287b0c5ea9cefb8ff2849",
	        "Created": "2022-07-28T22:16:13.852794368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 112412,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:16:14.157242832Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/b81c58afafcfd22acdd0aa7e5297727c37af934ba09287b0c5ea9cefb8ff2849/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b81c58afafcfd22acdd0aa7e5297727c37af934ba09287b0c5ea9cefb8ff2849/hostname",
	        "HostsPath": "/var/lib/docker/containers/b81c58afafcfd22acdd0aa7e5297727c37af934ba09287b0c5ea9cefb8ff2849/hosts",
	        "LogPath": "/var/lib/docker/containers/b81c58afafcfd22acdd0aa7e5297727c37af934ba09287b0c5ea9cefb8ff2849/b81c58afafcfd22acdd0aa7e5297727c37af934ba09287b0c5ea9cefb8ff2849-json.log",
	        "Name": "/test-preload-20220728151611-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "test-preload-20220728151611-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "test-preload-20220728151611-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2dc8282709293ce126c7ad6280dbd3ae709d6ab3315c25f6c13c305ff0efb001-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2dc8282709293ce126c7ad6280dbd3ae709d6ab3315c25f6c13c305ff0efb001/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2dc8282709293ce126c7ad6280dbd3ae709d6ab3315c25f6c13c305ff0efb001/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2dc8282709293ce126c7ad6280dbd3ae709d6ab3315c25f6c13c305ff0efb001/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "test-preload-20220728151611-12923",
	                "Source": "/var/lib/docker/volumes/test-preload-20220728151611-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "test-preload-20220728151611-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "test-preload-20220728151611-12923",
	                "name.minikube.sigs.k8s.io": "test-preload-20220728151611-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eef5982cc9bc5d2f30ee919d2d38c0dcd088be7f322d0c0fa2375b4456ce9fca",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56810"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56811"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56813"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "56814"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/eef5982cc9bc",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "test-preload-20220728151611-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b81c58afafcf",
	                        "test-preload-20220728151611-12923"
	                    ],
	                    "NetworkID": "a78bc5f9fbd3d281d38083229912511f548c6a5208e5da8e74d1425653a06e99",
	                    "EndpointID": "33ad82ec7be1f32a71c54ae04df93a93bd36721376904c4c56e0b30ab865179e",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
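The full inspect dump above can be narrowed with Go templates, the same mechanism the test harness uses elsewhere in this report. A hedged sketch against this profile (assumes the container still exists at the time of running):

	# Templates mirror the harness's own `docker container inspect -f` usage.
	docker container inspect -f '{{.State.Status}}' test-preload-20220728151611-12923
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' test-preload-20220728151611-12923

The second command would print the published apiserver port — 56814 in the dump above.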
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220728151611-12923 -n test-preload-20220728151611-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p test-preload-20220728151611-12923 -n test-preload-20220728151611-12923: exit status 6 (414.205615ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 15:20:33.417065   22252 status.go:413] kubeconfig endpoint: extract IP: "test-preload-20220728151611-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig

                                                
                                                
** /stderr **
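The stale-kubeconfig warning in the status output has the fix the output itself names; it would look like this if the profile were being kept (hypothetical here, since cleanup deletes it just below):

	# Hedged sketch — repoints the kubectl context at the profile's current endpoint.
	out/minikube-darwin-amd64 update-context -p test-preload-20220728151611-12923
	kubectl config current-context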
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "test-preload-20220728151611-12923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "test-preload-20220728151611-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220728151611-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220728151611-12923: (2.518170529s)
--- FAIL: TestPreload (264.32s)
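For reference, a retry that folds the log's own suggestion into the failing invocation from preload_test.go:50 would look roughly like this (hypothetical; the CI job does not attempt it):

	out/minikube-darwin-amd64 start -p test-preload-20220728151611-12923 --memory=2200 \
	  --alsologtostderr --wait=true --preload=false --driver=docker \
	  --kubernetes-version=v1.17.0 \
	  --extra-config=kubelet.cgroup-driver=systemd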

                                                
                                    
x
+
TestRunningBinaryUpgrade (67.46s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.96503917.exe start -p running-upgrade-20220728152536-12923 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.96503917.exe start -p running-upgrade-20220728152536-12923 --memory=2200 --vm-driver=docker : exit status 70 (51.767581332s)
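In the output below, the old v1.9.0 binary rewrites /lib/systemd/system/docker.service inside the node container and the docker unit then fails to restart (one visible oddity in the diff: the generated ExecReload= line has lost its $MAINPID argument). A hedged way to follow the "systemctl status docker.service" advice printed after the diff, assuming the node container (named after the profile, as with TestPreload above) is still running:

	# Hedged sketch — the outer docker exec still works even when dockerd inside the node is down.
	docker exec -it running-upgrade-20220728152536-12923 systemctl status docker.service
	docker exec -it running-upgrade-20220728152536-12923 journalctl -u docker.service --no-pager -n 50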

                                                
                                                
-- stdout --
	! [running-upgrade-20220728152536-12923] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig3707654259
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-28 22:26:09.804533592 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-20220728152536-12923" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-28 22:26:26.234660446 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-20220728152536-12923", then "minikube start -p running-upgrade-20220728152536-12923 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.26.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.26.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try 'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-28 22:26:26.234660446 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
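Analysis: the provisioning failure above comes from the legacy v1.9.0 binary rewriting /lib/systemd/system/docker.service wholesale. The generated comments visible in the diff describe the standard systemd override technique: an empty ExecStart= first clears the command inherited from the base unit, and a second ExecStart= then supplies the replacement; without the clearing line, systemd rejects the unit ("Service has more than one ExecStart= setting"). Note also that the rendered unit ends with "ExecReload=/bin/kill -s HUP " with no PID argument, i.e. the $MAINPID from the original file did not survive templating; whether that alone is why docker.service fails to start cannot be determined from this log. For reference, a minimal sketch of the same override technique done as a proper drop-in rather than an in-place rewrite (the drop-in path and dockerd flags below are illustrative placeholders, not the provisioner's actual values):

	# hypothetical drop-in; the dockerd flags are placeholders, not minikube's exact command line
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/10-override.conf <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker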
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.96503917.exe start -p running-upgrade-20220728152536-12923 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.96503917.exe start -p running-upgrade-20220728152536-12923 --memory=2200 --vm-driver=docker : exit status 70 (4.4141977s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220728152536-12923] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig3097525062
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220728152536-12923" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.96503917.exe start -p running-upgrade-20220728152536-12923 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.96503917.exe start -p running-upgrade-20220728152536-12923 --memory=2200 --vm-driver=docker : exit status 70 (4.429545119s)

                                                
                                                
-- stdout --
	* [running-upgrade-20220728152536-12923] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig1485799725
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220728152536-12923" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
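Analysis: all three start attempts fail identically: docker.service will not start inside the kic node container, and the log defers to "systemctl status docker.service" and "journalctl -xe" for the actual error. Because the node is itself a Docker container, those commands have to be routed through docker exec while the container is still up. A sketch, using the container name from this run:

	# surface the systemd-side failure detail the log points at
	docker exec running-upgrade-20220728152536-12923 systemctl status docker.service --no-pager
	docker exec running-upgrade-20220728152536-12923 journalctl -u docker.service --no-pager
	# compare what the legacy binary wrote against what systemd refused to load
	docker exec running-upgrade-20220728152536-12923 cat /lib/systemd/system/docker.service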
panic.go:482: *** TestRunningBinaryUpgrade FAILED at 2022-07-28 15:26:40.657134 -0700 PDT m=+2914.869089591
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-20220728152536-12923
helpers_test.go:235: (dbg) docker inspect running-upgrade-20220728152536-12923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c0ca82889e1ede12a0e1b20d4fb5b2a9e8eab408d71b1eac9e18133269ca2304",
	        "Created": "2022-07-28T22:26:17.998769727Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 147522,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:26:18.209524739Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/c0ca82889e1ede12a0e1b20d4fb5b2a9e8eab408d71b1eac9e18133269ca2304/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c0ca82889e1ede12a0e1b20d4fb5b2a9e8eab408d71b1eac9e18133269ca2304/hostname",
	        "HostsPath": "/var/lib/docker/containers/c0ca82889e1ede12a0e1b20d4fb5b2a9e8eab408d71b1eac9e18133269ca2304/hosts",
	        "LogPath": "/var/lib/docker/containers/c0ca82889e1ede12a0e1b20d4fb5b2a9e8eab408d71b1eac9e18133269ca2304/c0ca82889e1ede12a0e1b20d4fb5b2a9e8eab408d71b1eac9e18133269ca2304-json.log",
	        "Name": "/running-upgrade-20220728152536-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "running-upgrade-20220728152536-12923:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bc5b64f7978659dc865ed7da4b2416fb6d4fcd00d1c76cabdf39e3e6b830346d-init/diff:/var/lib/docker/overlay2/75527f61fe1c74577086f5a0cce2297dfb25dcdc876ab959e46b3c61d79d16c5/diff:/var/lib/docker/overlay2/adf4f675ee1a5c7a0969326b1988b69ad8f94d94b8aa317f8a5f101daa01aec5/diff:/var/lib/docker/overlay2/91e9bfe5724a133e2602657b8314695cd642cc22788aa678e9a6fe26f41ac3a8/diff:/var/lib/docker/overlay2/64a5dc4c5c6e0c2ca57fc7d1f3f7b8f1ec960c60ad9cd53a819f887cf1458915/diff:/var/lib/docker/overlay2/e5c8dcadcad8ff90ad268a64ba4e60117231e19a86932b04a7db1fa024be3c86/diff:/var/lib/docker/overlay2/65b47a70bf5876042eca4ed9dbc08728657e160283bc801943c3fffbb340ce0f/diff:/var/lib/docker/overlay2/a966a60d87c48b193f1052ef2312f5399d2d0c28684d527a00ef795862ad2f86/diff:/var/lib/docker/overlay2/235ec13a881649eb25b7c6ed7f9cc14d8b2a8d1b5b03a1c6bd306f2f92cc49ac/diff:/var/lib/docker/overlay2/1f606f9ff294f29132a91e84bb0e400600cebc8529c4516ac34de1ddd0b01fd1/diff:/var/lib/docker/overlay2/a9e839
19a13e139fff94bf384f62e1385061b705dee0288aece77716f851d5bd/diff:/var/lib/docker/overlay2/ed5bc9b221d0f65ba5a1c158e59a3afc035d222a70673dd4a7591e1eec96661c/diff:/var/lib/docker/overlay2/23504c6d2bb74a35f1f62b55cc70999531271eb46a68f3de8e5f6fa370afcc92/diff:/var/lib/docker/overlay2/c0c1e1ab226a8f6be7ea1a2155264c5440dde2763c188c87b0f4147c032ed4fa/diff:/var/lib/docker/overlay2/ddceb8ca34e4f17bcd9c8c526da1968bf2370447d391f33bb49272973fff4c3c/diff:/var/lib/docker/overlay2/424bd5c93d5826037ef37255f04c2b8c52c087089e936e51c60aaaffc68a4a94/diff:/var/lib/docker/overlay2/0a96e39a584abac7143d0e741b9d5f13a5e6ba3bfe7ff933be8676e27c598c4e/diff:/var/lib/docker/overlay2/48cea15afbd051f76a5acd27bb40516b3003dbfd1657b8e069101bc0a4117e42/diff:/var/lib/docker/overlay2/f778ce187c19d815f174c37c9c067e8207c9cae92d061ef64bbcb50b849a7f06/diff:/var/lib/docker/overlay2/c48eb9a685ac16678a24297c706c32ec213cde3512c075e867634c6845eccd91/diff:/var/lib/docker/overlay2/6c37355a10c9b0a71d6151bb7a607d35a3857290c5478e89a1b3eb771ebf9e27/diff:/var/lib/d
ocker/overlay2/28fbf0eab797492cf3c07b0822193d73a7d34cda40c5c234466eb199c3bdbd0a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc5b64f7978659dc865ed7da4b2416fb6d4fcd00d1c76cabdf39e3e6b830346d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc5b64f7978659dc865ed7da4b2416fb6d4fcd00d1c76cabdf39e3e6b830346d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc5b64f7978659dc865ed7da4b2416fb6d4fcd00d1c76cabdf39e3e6b830346d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20220728152536-12923",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20220728152536-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20220728152536-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20220728152536-12923",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20220728152536-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6084fb0f2e7ca7770b6099dccdf3742f4c82ebc01fbb11c46ed9b957c301197d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57417"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57416"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6084fb0f2e7c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "f353540bd2c37c7df8d169860dff80ce8a417ffcc6c1609c9a53fe9e5bc50490",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "76531f69b84c5a1c7848fda07e174d7f5bc8f08be4ad7f1191b14db3dc0aeb08",
	                    "EndpointID": "f353540bd2c37c7df8d169860dff80ce8a417ffcc6c1609c9a53fe9e5bc50490",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
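Most of the inspect dump above is incidental to this failure; the load-bearing fields are State.Status ("running") and the host-port mappings under NetworkSettings.Ports (22->57417, 2376->57415, 8443->57416). A sketch for pulling just those fields via docker inspect's Go-template support instead of scanning the full document:

	docker inspect running-upgrade-20220728152536-12923 --format '{{.State.Status}}'
	docker inspect running-upgrade-20220728152536-12923 --format '{{json .NetworkSettings.Ports}}'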
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220728152536-12923 -n running-upgrade-20220728152536-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220728152536-12923 -n running-upgrade-20220728152536-12923: exit status 6 (406.319782ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 15:26:41.117631   24373 status.go:413] kubeconfig endpoint: extract IP: "running-upgrade-20220728152536-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-20220728152536-12923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-20220728152536-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220728152536-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220728152536-12923: (2.382268563s)
--- FAIL: TestRunningBinaryUpgrade (67.46s)
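To iterate on this failure outside CI, the test can be run in isolation from a minikube checkout. The harness flags vary by branch, so treat this as a sketch and confirm against the repo's contributor docs:

	# assumes the binary under test (out/minikube-darwin-amd64) has already been built
	go test ./test/integration -run TestRunningBinaryUpgrade -timeout 60m -v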

                                                
                                    
TestKubernetesUpgrade (346.65s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220728152732-12923 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0728 15:27:37.940395   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 15:28:04.791328   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:28:04.797026   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:28:04.807347   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:28:04.827617   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:28:04.868789   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:28:04.948972   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:28:05.110313   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:28:05.430984   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:28:06.071608   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:28:07.351940   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:28:09.912316   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:28:15.034504   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220728152732-12923 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m13.8844039s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20220728152732-12923] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-20220728152732-12923 in cluster kubernetes-upgrade-20220728152732-12923
	* Pulling base image ...
	* Downloading Kubernetes v1.16.0 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0728 15:27:32.055920   24728 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:27:32.056104   24728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:27:32.056110   24728 out.go:309] Setting ErrFile to fd 2...
	I0728 15:27:32.056114   24728 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:27:32.056217   24728 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:27:32.056719   24728 out.go:303] Setting JSON to false
	I0728 15:27:32.071911   24728 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8294,"bootTime":1659038958,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:27:32.071997   24728 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:27:32.094319   24728 out.go:177] * [kubernetes-upgrade-20220728152732-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:27:32.136485   24728 notify.go:193] Checking for updates...
	I0728 15:27:32.158190   24728 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:27:32.179083   24728 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:27:32.200211   24728 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:27:32.221132   24728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:27:32.242146   24728 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:27:32.263558   24728 config.go:178] Loaded profile config "cert-expiration-20220728152452-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:27:32.263612   24728 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:27:32.332560   24728 docker.go:137] docker version: linux-20.10.17
	I0728 15:27:32.332666   24728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:27:32.465599   24728 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-28 22:27:32.400062029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:27:32.509402   24728 out.go:177] * Using the docker driver based on user configuration
	I0728 15:27:32.531434   24728 start.go:284] selected driver: docker
	I0728 15:27:32.531468   24728 start.go:808] validating driver "docker" against <nil>
	I0728 15:27:32.531494   24728 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:27:32.534866   24728 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:27:32.665440   24728 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-28 22:27:32.602529845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:27:32.665547   24728 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0728 15:27:32.665687   24728 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 15:27:32.687416   24728 out.go:177] * Using Docker Desktop driver with root privileges
	I0728 15:27:32.709176   24728 cni.go:95] Creating CNI manager for ""
	I0728 15:27:32.709196   24728 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:27:32.709203   24728 start_flags.go:310] config:
	{Name:kubernetes-upgrade-20220728152732-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220728152732-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:27:32.730264   24728 out.go:177] * Starting control plane node kubernetes-upgrade-20220728152732-12923 in cluster kubernetes-upgrade-20220728152732-12923
	I0728 15:27:32.772215   24728 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:27:32.793367   24728 out.go:177] * Pulling base image ...
	I0728 15:27:32.836324   24728 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 15:27:32.836351   24728 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:27:32.899391   24728 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:27:32.899419   24728 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:27:32.902176   24728 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0728 15:27:32.902204   24728 cache.go:57] Caching tarball of preloaded images
	I0728 15:27:32.902414   24728 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 15:27:32.940427   24728 out.go:177] * Downloading Kubernetes v1.16.0 preload ...
	I0728 15:27:32.999967   24728 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0728 15:27:33.094013   24728 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0728 15:27:37.605522   24728 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0728 15:27:37.605660   24728 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0728 15:27:38.153775   24728 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
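	(Aside: the preload fetch above is checksum-gated; the ?checksum=md5:... query on the download URL is what the saving/verifying checksum lines refer to. The artifact can be spot-checked by hand with the same URL and digest printed in this log; md5(1) is the macOS spelling, use md5sum on Linux:
	  curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	  md5 preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4   # expect 326f3ce331abb64565b50b8c9e791244
	)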
	I0728 15:27:38.153859   24728 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/config.json ...
	I0728 15:27:38.153888   24728 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/config.json: {Name:mkf0bb81aeed84ef677b69b1f181f4e4cbf8e1ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:27:38.154147   24728 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:27:38.154177   24728 start.go:370] acquiring machines lock for kubernetes-upgrade-20220728152732-12923: {Name:mke6099dd56aa9e3d6a29fd0d07fea3ae3811312 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:27:38.154264   24728 start.go:374] acquired machines lock for "kubernetes-upgrade-20220728152732-12923" in 79.906µs
	I0728 15:27:38.154287   24728 start.go:92] Provisioning new machine with config: &{Name:kubernetes-upgrade-20220728152732-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220728152732-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 15:27:38.154330   24728 start.go:132] createHost starting for "" (driver="docker")
	I0728 15:27:38.198328   24728 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0728 15:27:38.198763   24728 start.go:166] libmachine.API.Create for "kubernetes-upgrade-20220728152732-12923" (driver="docker")
	I0728 15:27:38.198813   24728 client.go:168] LocalClient.Create starting
	I0728 15:27:38.199000   24728 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem
	I0728 15:27:38.199067   24728 main.go:134] libmachine: Decoding PEM data...
	I0728 15:27:38.199092   24728 main.go:134] libmachine: Parsing certificate...
	I0728 15:27:38.199191   24728 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem
	I0728 15:27:38.199237   24728 main.go:134] libmachine: Decoding PEM data...
	I0728 15:27:38.199253   24728 main.go:134] libmachine: Parsing certificate...
	I0728 15:27:38.200111   24728 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220728152732-12923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0728 15:27:38.263470   24728 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220728152732-12923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0728 15:27:38.263556   24728 network_create.go:272] running [docker network inspect kubernetes-upgrade-20220728152732-12923] to gather additional debugging logs...
	I0728 15:27:38.263575   24728 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-20220728152732-12923
	W0728 15:27:38.325329   24728 cli_runner.go:211] docker network inspect kubernetes-upgrade-20220728152732-12923 returned with exit code 1
	I0728 15:27:38.325357   24728 network_create.go:275] error running [docker network inspect kubernetes-upgrade-20220728152732-12923]: docker network inspect kubernetes-upgrade-20220728152732-12923: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-20220728152732-12923
	I0728 15:27:38.325391   24728 network_create.go:277] output of [docker network inspect kubernetes-upgrade-20220728152732-12923]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-20220728152732-12923
	
	** /stderr **
	I0728 15:27:38.325496   24728 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0728 15:27:38.386359   24728 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000d84410] misses:0}
	I0728 15:27:38.386396   24728 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:27:38.386411   24728 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220728152732-12923 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0728 15:27:38.386491   24728 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220728152732-12923 kubernetes-upgrade-20220728152732-12923
	W0728 15:27:38.447710   24728 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220728152732-12923 kubernetes-upgrade-20220728152732-12923 returned with exit code 1
	W0728 15:27:38.447749   24728 network_create.go:107] failed to create docker network kubernetes-upgrade-20220728152732-12923 192.168.49.0/24, will retry: subnet is taken
	I0728 15:27:38.448017   24728 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d84410] amended:false}} dirty:map[] misses:0}
	I0728 15:27:38.448031   24728 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:27:38.448248   24728 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d84410] amended:true}} dirty:map[192.168.49.0:0xc000d84410 192.168.58.0:0xc0007aa030] misses:0}
	I0728 15:27:38.448261   24728 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:27:38.448268   24728 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220728152732-12923 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0728 15:27:38.448332   24728 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220728152732-12923 kubernetes-upgrade-20220728152732-12923
	W0728 15:27:38.509215   24728 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220728152732-12923 kubernetes-upgrade-20220728152732-12923 returned with exit code 1
	W0728 15:27:38.509282   24728 network_create.go:107] failed to create docker network kubernetes-upgrade-20220728152732-12923 192.168.58.0/24, will retry: subnet is taken
	I0728 15:27:38.509542   24728 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d84410] amended:true}} dirty:map[192.168.49.0:0xc000d84410 192.168.58.0:0xc0007aa030] misses:1}
	I0728 15:27:38.509562   24728 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:27:38.509791   24728 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d84410] amended:true}} dirty:map[192.168.49.0:0xc000d84410 192.168.58.0:0xc0007aa030 192.168.67.0:0xc000d84448] misses:1}
	I0728 15:27:38.509809   24728 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:27:38.509818   24728 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220728152732-12923 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0728 15:27:38.509885   24728 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220728152732-12923 kubernetes-upgrade-20220728152732-12923
	W0728 15:27:38.570925   24728 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220728152732-12923 kubernetes-upgrade-20220728152732-12923 returned with exit code 1
	W0728 15:27:38.570959   24728 network_create.go:107] failed to create docker network kubernetes-upgrade-20220728152732-12923 192.168.67.0/24, will retry: subnet is taken
	I0728 15:27:38.571260   24728 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d84410] amended:true}} dirty:map[192.168.49.0:0xc000d84410 192.168.58.0:0xc0007aa030 192.168.67.0:0xc000d84448] misses:2}
	I0728 15:27:38.571278   24728 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:27:38.571484   24728 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d84410] amended:true}} dirty:map[192.168.49.0:0xc000d84410 192.168.58.0:0xc0007aa030 192.168.67.0:0xc000d84448 192.168.76.0:0xc0007aa068] misses:2}
	I0728 15:27:38.571499   24728 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:27:38.571506   24728 network_create.go:115] attempt to create docker network kubernetes-upgrade-20220728152732-12923 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0728 15:27:38.571564   24728 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-20220728152732-12923 kubernetes-upgrade-20220728152732-12923
	I0728 15:27:38.667200   24728 network_create.go:99] docker network kubernetes-upgrade-20220728152732-12923 192.168.76.0/24 created
	I0728 15:27:38.667239   24728 kic.go:106] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-20220728152732-12923" container
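The lines above show minikube's free-subnet search: it reserves 192.168.49.0/24, gets "subnet is taken" back from `docker network create`, and steps the third octet by 9 (49 → 58 → 67 → 76) until creation succeeds, then assigns the container the first client address (.2). A minimal Go sketch of that retry loop, assuming a hypothetical tryCreate helper (the real logic lives in minikube's network_create.go and also handles the timed reservations logged above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tryCreate shells out to `docker network create` the way the log does;
// the option set is copied from the logged command line.
func tryCreate(name, subnet, gateway string) error {
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet, "--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, strings.TrimSpace(string(out)))
	}
	return nil
}

func main() {
	const name = "kubernetes-upgrade-20220728152732-12923"
	// Candidate /24s step the third octet by 9, matching the log: 49, 58, 67, 76, ...
	for third := 49; third <= 254; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		if err := tryCreate(name, subnet, gateway); err != nil {
			fmt.Printf("subnet %s taken, retrying: %v\n", subnet, err)
			continue
		}
		// The static container IP is ClientMin, i.e. gateway + 1 (.2 here).
		fmt.Printf("created %s on %s; static container IP 192.168.%d.2\n", name, subnet, third)
		return
	}
}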
	I0728 15:27:38.667357   24728 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0728 15:27:38.730408   24728 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-20220728152732-12923 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220728152732-12923 --label created_by.minikube.sigs.k8s.io=true
	I0728 15:27:38.791681   24728 oci.go:103] Successfully created a docker volume kubernetes-upgrade-20220728152732-12923
	I0728 15:27:38.791819   24728 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-20220728152732-12923-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220728152732-12923 --entrypoint /usr/bin/test -v kubernetes-upgrade-20220728152732-12923:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0728 15:27:39.230615   24728 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-20220728152732-12923
	I0728 15:27:39.230827   24728 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 15:27:39.230842   24728 kic.go:179] Starting extracting preloaded images to volume ...
	I0728 15:27:39.230947   24728 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220728152732-12923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
	I0728 15:27:42.932853   24728 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-20220728152732-12923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir: (3.701880872s)
	I0728 15:27:42.932882   24728 kic.go:188] duration metric: took 3.702089 seconds to extract preloaded images to volume
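The 3.7 s step above is the preload extraction: the lz4 tarball of cached images is bind-mounted read-only into a throwaway kicbase container and untarred straight into the machine's docker volume, so no tar/lz4 tooling is needed on the macOS host. A hedged Go sketch of that single docker run (argument layout copied from the logged command; paths shortened for readability):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the logged docker run: mount the preload tarball
// read-only at /preloaded.tar, mount the machine volume at /extractDir,
// and run tar with an lz4 decompressor inside the kicbase image.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		"kubernetes-upgrade-20220728152732-12923",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481")
	if err != nil {
		fmt.Println(err)
	}
}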
	I0728 15:27:42.932995   24728 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0728 15:27:43.064240   24728 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-20220728152732-12923 --name kubernetes-upgrade-20220728152732-12923 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-20220728152732-12923 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-20220728152732-12923 --network kubernetes-upgrade-20220728152732-12923 --ip 192.168.76.2 --volume kubernetes-upgrade-20220728152732-12923:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0728 15:27:43.425325   24728 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220728152732-12923 --format={{.State.Running}}
	I0728 15:27:43.490462   24728 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220728152732-12923 --format={{.State.Status}}
	I0728 15:27:43.557876   24728 cli_runner.go:164] Run: docker exec kubernetes-upgrade-20220728152732-12923 stat /var/lib/dpkg/alternatives/iptables
	I0728 15:27:43.663705   24728 oci.go:144] the created container "kubernetes-upgrade-20220728152732-12923" has a running status.
	I0728 15:27:43.663739   24728 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728152732-12923/id_rsa...
	I0728 15:27:43.769627   24728 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728152732-12923/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0728 15:27:43.883169   24728 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220728152732-12923 --format={{.State.Status}}
	I0728 15:27:43.947255   24728 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0728 15:27:43.947274   24728 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-20220728152732-12923 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0728 15:27:44.057187   24728 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220728152732-12923 --format={{.State.Status}}
	I0728 15:27:44.121884   24728 machine.go:88] provisioning docker machine ...
	I0728 15:27:44.121921   24728 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-20220728152732-12923"
	I0728 15:27:44.123997   24728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:44.189253   24728 main.go:134] libmachine: Using SSH client type: native
	I0728 15:27:44.189459   24728 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57530 <nil> <nil>}
	I0728 15:27:44.189474   24728 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20220728152732-12923 && echo "kubernetes-upgrade-20220728152732-12923" | sudo tee /etc/hostname
	I0728 15:27:44.319888   24728 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20220728152732-12923
	
	I0728 15:27:44.319999   24728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:44.387967   24728 main.go:134] libmachine: Using SSH client type: native
	I0728 15:27:44.388130   24728 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57530 <nil> <nil>}
	I0728 15:27:44.388147   24728 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20220728152732-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20220728152732-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20220728152732-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:27:44.506479   24728 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:27:44.506521   24728 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:27:44.506540   24728 ubuntu.go:177] setting up certificates
	I0728 15:27:44.506549   24728 provision.go:83] configureAuth start
	I0728 15:27:44.506618   24728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:44.570885   24728 provision.go:138] copyHostCerts
	I0728 15:27:44.571038   24728 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:27:44.571046   24728 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:27:44.571159   24728 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:27:44.571357   24728 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:27:44.571366   24728 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:27:44.571431   24728 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:27:44.571584   24728 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:27:44.571590   24728 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:27:44.571650   24728 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:27:44.571770   24728 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20220728152732-12923 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-20220728152732-12923]
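The server certificate above is minted from the local minikube CA with a SAN list covering the static container IP, loopback, and the machine names. A self-contained Go sketch of the same idea using crypto/x509 (illustrative only: it self-signs for brevity, whereas the log shows the cert being signed with ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-20220728152732-12923"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirror the logged san=[...] list: IPs plus host names.
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "kubernetes-upgrade-20220728152732-12923"},
	}
	// Self-signed here; substituting a CA cert and key as the parent pair
	// reproduces what provision.go logs above.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}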
	I0728 15:27:44.682042   24728 provision.go:172] copyRemoteCerts
	I0728 15:27:44.682099   24728 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:27:44.682140   24728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:44.746470   24728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728152732-12923/id_rsa Username:docker}
	I0728 15:27:44.834525   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:27:44.851399   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0728 15:27:44.868428   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 15:27:44.884435   24728 provision.go:86] duration metric: configureAuth took 377.877392ms
	I0728 15:27:44.884451   24728 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:27:44.884591   24728 config.go:178] Loaded profile config "kubernetes-upgrade-20220728152732-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0728 15:27:44.884645   24728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:44.949929   24728 main.go:134] libmachine: Using SSH client type: native
	I0728 15:27:44.950074   24728 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57530 <nil> <nil>}
	I0728 15:27:44.950089   24728 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:27:45.071770   24728 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:27:45.071787   24728 ubuntu.go:71] root file system type: overlay
	I0728 15:27:45.071949   24728 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:27:45.072036   24728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:45.136656   24728 main.go:134] libmachine: Using SSH client type: native
	I0728 15:27:45.136833   24728 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57530 <nil> <nil>}
	I0728 15:27:45.136905   24728 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:27:45.264344   24728 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:27:45.264439   24728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:45.328304   24728 main.go:134] libmachine: Using SSH client type: native
	I0728 15:27:45.328477   24728 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57530 <nil> <nil>}
	I0728 15:27:45.328491   24728 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:27:45.903564   24728 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-28 22:27:45.282016243 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
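The unified diff above is simply the output of the update command run at 15:27:45.328: `diff -u old new || { mv new old; daemon-reload; enable; restart; }`. Since diff exits non-zero only when the files differ, the move-and-restart branch fires only on a real change, making the unit update idempotent. A hedged Go sketch of the same write-compare-swap pattern for any config file (names are illustrative, not minikube's API):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateIfChanged swaps newContent into path only when it differs from the
// current file, then runs restart -- the same idempotent pattern as the
// logged `diff -u ... || { mv ...; systemctl ...; }` one-liner.
func updateIfChanged(path string, newContent []byte, restart func() error) error {
	old, _ := os.ReadFile(path) // a missing file reads as empty, forcing an update
	if bytes.Equal(old, newContent) {
		return nil // unchanged: leave the running daemon untouched
	}
	if err := os.WriteFile(path+".new", newContent, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	return restart()
}

func main() {
	err := updateIfChanged("/lib/systemd/system/docker.service", []byte("..."), func() error {
		// daemon-reload + restart, as in the logged command
		if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		out, err := exec.Command("systemctl", "restart", "docker").CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}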
	
	I0728 15:27:45.903635   24728 machine.go:91] provisioned docker machine in 1.781756134s
	I0728 15:27:45.903642   24728 client.go:171] LocalClient.Create took 7.704925066s
	I0728 15:27:45.903656   24728 start.go:174] duration metric: libmachine.API.Create for "kubernetes-upgrade-20220728152732-12923" took 7.704999686s
	I0728 15:27:45.903667   24728 start.go:307] post-start starting for "kubernetes-upgrade-20220728152732-12923" (driver="docker")
	I0728 15:27:45.903672   24728 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:27:45.903750   24728 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:27:45.903817   24728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:45.973630   24728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728152732-12923/id_rsa Username:docker}
	I0728 15:27:46.060548   24728 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:27:46.063861   24728 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:27:46.063880   24728 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:27:46.063887   24728 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:27:46.063892   24728 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:27:46.063902   24728 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:27:46.064003   24728 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:27:46.064172   24728 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:27:46.064352   24728 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:27:46.071307   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:27:46.089158   24728 start.go:310] post-start completed in 185.476195ms
	I0728 15:27:46.089710   24728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:46.153664   24728 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/config.json ...
	I0728 15:27:46.154154   24728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:27:46.154205   24728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:46.218309   24728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728152732-12923/id_rsa Username:docker}
	I0728 15:27:46.303335   24728 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:27:46.307582   24728 start.go:135] duration metric: createHost completed in 8.15335195s
	I0728 15:27:46.307599   24728 start.go:82] releasing machines lock for "kubernetes-upgrade-20220728152732-12923", held for 8.153435033s
	I0728 15:27:46.307690   24728 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:46.373258   24728 ssh_runner.go:195] Run: systemctl --version
	I0728 15:27:46.373260   24728 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:27:46.373320   24728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:46.373351   24728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:46.439535   24728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728152732-12923/id_rsa Username:docker}
	I0728 15:27:46.441627   24728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57530 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728152732-12923/id_rsa Username:docker}
	I0728 15:27:46.730164   24728 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:27:46.739733   24728 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:27:46.739785   24728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:27:46.748603   24728 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:27:46.761150   24728 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:27:46.833006   24728 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:27:46.901554   24728 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:27:46.969845   24728 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:27:47.175235   24728 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:27:47.212065   24728 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:27:47.268022   24728 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0728 15:27:47.268123   24728 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-20220728152732-12923 dig +short host.docker.internal
	I0728 15:27:47.385882   24728 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:27:47.385978   24728 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:27:47.390019   24728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
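The two commands above are minikube's ensure-hosts-entry idiom: dig host.docker.internal inside the container to learn the host's IP (192.168.65.2 here), filter any stale host.minikube.internal line out of /etc/hosts, append a fresh one, and copy the result back. The same pipeline recurs below for control-plane.minikube.internal. A minimal Go equivalent (illustrative; the real code just runs the logged shell pipeline over SSH):

package main

import (
	"os"
	"strings"
)

// ensureHostsLine drops any line ending in "\t"+host and appends ip+"\t"+host,
// mirroring the logged `{ grep -v ...; echo ...; } > /tmp/h.$$; cp` pipeline.
func ensureHostsLine(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// 192.168.65.2 is the host IP dug from host.docker.internal above.
	_ = ensureHostsLine("/etc/hosts", "192.168.65.2", "host.minikube.internal")
}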
	I0728 15:27:47.399375   24728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:27:47.464179   24728 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 15:27:47.464259   24728 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:27:47.494235   24728 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0728 15:27:47.494253   24728 docker.go:542] Images already preloaded, skipping extraction
	I0728 15:27:47.494334   24728 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:27:47.524119   24728 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0728 15:27:47.524142   24728 cache_images.go:84] Images are preloaded, skipping loading
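"Images are preloaded, skipping loading" is decided by comparing the `docker images --format {{.Repository}}:{{.Tag}}` listing against the image set expected for v1.16.0; since the preload tarball was already extracted into the volume, every expected image is present and both the extraction and the image-load paths are skipped. A small Go sketch of that containment check (helper name is assumed, not minikube's):

package main

import "fmt"

// imagesPreloaded reports whether every expected image already appears in the
// `docker images --format {{.Repository}}:{{.Tag}}` listing.
func imagesPreloaded(listed, expected []string) bool {
	have := make(map[string]bool, len(listed))
	for _, img := range listed {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	expected := []string{ // the v1.16.0 set from the log above
		"k8s.gcr.io/kube-apiserver:v1.16.0",
		"k8s.gcr.io/kube-proxy:v1.16.0",
		"k8s.gcr.io/kube-controller-manager:v1.16.0",
		"k8s.gcr.io/kube-scheduler:v1.16.0",
		"k8s.gcr.io/etcd:3.3.15-0",
		"k8s.gcr.io/coredns:1.6.2",
		"k8s.gcr.io/pause:3.1",
	}
	fmt.Println(imagesPreloaded(expected, expected)) // true
}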
	I0728 15:27:47.524212   24728 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:27:47.600662   24728 cni.go:95] Creating CNI manager for ""
	I0728 15:27:47.600674   24728 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:27:47.600689   24728 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:27:47.600706   24728 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20220728152732-12923 NodeName:kubernetes-upgrade-20220728152732-12923 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:27:47.600819   24728 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-20220728152732-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-20220728152732-12923
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
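The manifest above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from the kubeadm options struct logged at kubeadm.go:158, using the kubeadm.k8s.io/v1beta1 schema this kubeadm version accepts. A toy text/template rendering of just the fields visible in the log (template text abbreviated, struct field names illustrative; the real template is larger):

package main

import (
	"os"
	"text/template"
)

// A tiny slice of the real template: only fields visible in the log above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	CRISocket        string
	NodeName         string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values copied from the kubeadm options line in the log.
	t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.76.2",
		APIServerPort:    8443,
		CRISocket:        "/var/run/dockershim.sock",
		NodeName:         "kubernetes-upgrade-20220728152732-12923",
	})
}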
	
	I0728 15:27:47.600898   24728 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-20220728152732-12923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220728152732-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0728 15:27:47.600959   24728 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0728 15:27:47.608587   24728 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:27:47.608639   24728 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:27:47.615609   24728 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0728 15:27:47.629227   24728 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:27:47.641529   24728 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0728 15:27:47.657444   24728 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:27:47.661507   24728 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:27:47.670990   24728 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923 for IP: 192.168.76.2
	I0728 15:27:47.671103   24728 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:27:47.671150   24728 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:27:47.671195   24728 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/client.key
	I0728 15:27:47.671207   24728 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/client.crt with IP's: []
	I0728 15:27:47.761654   24728 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/client.crt ...
	I0728 15:27:47.761669   24728 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/client.crt: {Name:mk537459a4ebe63f2aba9d08a75d50b0c7524e79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:27:47.761939   24728 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/client.key ...
	I0728 15:27:47.761947   24728 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/client.key: {Name:mkf97d1cf272c0cc71b984d39b919ebc3452cdb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:27:47.762145   24728 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/apiserver.key.31bdca25
	I0728 15:27:47.762166   24728 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0728 15:27:47.977553   24728 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/apiserver.crt.31bdca25 ...
	I0728 15:27:47.977567   24728 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/apiserver.crt.31bdca25: {Name:mk0bd344fabdfb00cb510cb2b289c286d67a0d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:27:47.977852   24728 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/apiserver.key.31bdca25 ...
	I0728 15:27:47.977860   24728 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/apiserver.key.31bdca25: {Name:mk58903d135fd87b12b1340fc97ed7a71e771a28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:27:47.978053   24728 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/apiserver.crt
	I0728 15:27:47.978255   24728 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/apiserver.key
	I0728 15:27:47.978404   24728 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/proxy-client.key
	I0728 15:27:47.978419   24728 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/proxy-client.crt with IP's: []
	I0728 15:27:48.123698   24728 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/proxy-client.crt ...
	I0728 15:27:48.123713   24728 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/proxy-client.crt: {Name:mk252ac5746c77b9e7c40472ea7633ee71a83243 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:27:48.123996   24728 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/proxy-client.key ...
	I0728 15:27:48.124010   24728 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/proxy-client.key: {Name:mkeb673222ac8a107ccc412b2991dd79e2a82c9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:27:48.124417   24728 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:27:48.124458   24728 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:27:48.124469   24728 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:27:48.124501   24728 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:27:48.124531   24728 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:27:48.124565   24728 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:27:48.124633   24728 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:27:48.125157   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:27:48.143080   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 15:27:48.160550   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:27:48.177167   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 15:27:48.193357   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:27:48.209749   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:27:48.226163   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:27:48.242455   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:27:48.258811   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:27:48.275770   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:27:48.292361   24728 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:27:48.309096   24728 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
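	The scp sequence above stages the cluster PKI under /var/lib/minikube/certs and writes the kubeconfig to /var/lib/minikube/kubeconfig inside the node. As a quick sanity check one could list the staged files from the host (a sketch, not part of this run; it assumes `minikube ssh -- <cmd>` passthrough and reuses the profile name from this log):

	    # Confirm the staged certificates and kubeconfig inside the minikube node
	    minikube -p kubernetes-upgrade-20220728152732-12923 ssh -- ls -la /var/lib/minikube/certs
	    minikube -p kubernetes-upgrade-20220728152732-12923 ssh -- sudo cat /var/lib/minikube/kubeconfig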
	I0728 15:27:48.321451   24728 ssh_runner.go:195] Run: openssl version
	I0728 15:27:48.326737   24728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:27:48.334585   24728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:27:48.338619   24728 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:27:48.338665   24728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:27:48.343709   24728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 15:27:48.351271   24728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:27:48.358962   24728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:27:48.363000   24728 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:27:48.363043   24728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:27:48.367882   24728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 15:27:48.375374   24728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:27:48.382887   24728 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:27:48.386758   24728 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:27:48.386807   24728 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:27:48.391935   24728 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
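	The test/ln pairs above build the standard OpenSSL hashed-CA layout: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (e.g. b5213941.0) so hashed directory lookups can resolve it. The same idiom for a single certificate, as a standalone sketch using the paths from this log:

	    # Derive the subject hash OpenSSL uses for c_rehash-style lookups
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # Create the hashed symlink only if it does not already exist
	    sudo test -L "/etc/ssl/certs/${hash}.0" \
	      || sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"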
	I0728 15:27:48.399613   24728 kubeadm.go:395] StartCluster: {Name:kubernetes-upgrade-20220728152732-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-20220728152732-12923 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
}
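	The struct dump above is the in-memory ClusterConfig handed to StartCluster; minikube persists the same data as JSON at profiles/<name>/config.json under the .minikube directory, which is usually easier to inspect than the one-line dump. A sketch (assumes jq is installed and MINIKUBE_HOME points at the .minikube directory used by this run):

	    # Pretty-print the persisted counterpart of the ClusterConfig logged above
	    jq '.KubernetesConfig | {KubernetesVersion, ClusterName, ContainerRuntime}' \
	      "$MINIKUBE_HOME/profiles/kubernetes-upgrade-20220728152732-12923/config.json"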
	I0728 15:27:48.399703   24728 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:27:48.431687   24728 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:27:48.439190   24728 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:27:48.446425   24728 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:27:48.446468   24728 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:27:48.453695   24728 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 15:27:48.453722   24728 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:27:49.189259   24728 out.go:204]   - Generating certificates and keys ...
	I0728 15:27:51.668360   24728 out.go:204]   - Booting up control plane ...
	W0728 15:29:46.582420   24728 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-20220728152732-12923 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-20220728152732-12923 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
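	Before the retry below fires, the checks kubeadm recommends can be run directly against the node. A sketch using the exact commands from the output above (profile name from this run; each command executes inside the node via `minikube ssh`):

	    # Is the kubelet service running inside the node?
	    minikube -p kubernetes-upgrade-20220728152732-12923 ssh -- sudo systemctl status kubelet
	    # The kubelet journal usually carries the real failure reason
	    minikube -p kubernetes-upgrade-20220728152732-12923 ssh -- sudo journalctl -xeu kubelet
	    # Control-plane containers that started and then crashed
	    minikube -p kubernetes-upgrade-20220728152732-12923 ssh -- 'docker ps -a | grep kube | grep -v pause'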
	
	I0728 15:29:46.582453   24728 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0728 15:29:47.006387   24728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:29:47.020711   24728 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:29:47.020774   24728 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:29:47.029744   24728 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 15:29:47.029771   24728 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:29:47.753351   24728 out.go:204]   - Generating certificates and keys ...
	I0728 15:29:48.333560   24728 out.go:204]   - Booting up control plane ...
	I0728 15:31:43.250579   24728 kubeadm.go:397] StartCluster complete in 3m54.854072896s
	I0728 15:31:43.250654   24728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:31:43.279328   24728 logs.go:274] 0 containers: []
	W0728 15:31:43.279342   24728 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:31:43.279403   24728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:31:43.307063   24728 logs.go:274] 0 containers: []
	W0728 15:31:43.307077   24728 logs.go:276] No container was found matching "etcd"
	I0728 15:31:43.307156   24728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:31:43.335842   24728 logs.go:274] 0 containers: []
	W0728 15:31:43.335855   24728 logs.go:276] No container was found matching "coredns"
	I0728 15:31:43.335919   24728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:31:43.363935   24728 logs.go:274] 0 containers: []
	W0728 15:31:43.363948   24728 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:31:43.364020   24728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:31:43.392114   24728 logs.go:274] 0 containers: []
	W0728 15:31:43.392126   24728 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:31:43.392184   24728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:31:43.420358   24728 logs.go:274] 0 containers: []
	W0728 15:31:43.420370   24728 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:31:43.420431   24728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:31:43.448451   24728 logs.go:274] 0 containers: []
	W0728 15:31:43.448464   24728 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:31:43.448518   24728 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:31:43.476707   24728 logs.go:274] 0 containers: []
	W0728 15:31:43.476719   24728 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:31:43.476727   24728 logs.go:123] Gathering logs for kubelet ...
	I0728 15:31:43.476734   24728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:31:43.518435   24728 logs.go:123] Gathering logs for dmesg ...
	I0728 15:31:43.518450   24728 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:31:43.531333   24728 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:31:43.531346   24728 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:31:43.586684   24728 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:31:43.586697   24728 logs.go:123] Gathering logs for Docker ...
	I0728 15:31:43.586704   24728 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:31:43.602421   24728 logs.go:123] Gathering logs for container status ...
	I0728 15:31:43.602435   24728 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:31:45.662997   24728 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060578093s)
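	The container-status probe above is a fallback chain: if crictl is on PATH it is used, and if that call fails (crictl absent or erroring) the `||` falls through to plain docker ps -a. Standalone, the idiom looks like this (a sketch):

	    # Prefer crictl when installed; a failed call falls through to the Docker CLI
	    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a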
	W0728 15:31:45.663168   24728 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0728 15:31:45.663191   24728 out.go:239] * 
	W0728 15:31:45.663363   24728 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 15:31:45.663382   24728 out.go:239] * 
	W0728 15:31:45.663988   24728 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 15:31:45.729070   24728 out.go:177] 
	W0728 15:31:45.772842   24728 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 15:31:45.772950   24728 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0728 15:31:45.773018   24728 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0728 15:31:45.815041   24728 out.go:177] 

                                                
                                                
** /stderr **
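The suggestion in the output above can be applied verbatim on the next start of this profile; a sketch of the invocation (same binary and flags as the failed run, plus the extra kubelet config):

    # Retry the v1.16.0 start with the kubelet pinned to the systemd cgroup driver
    out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220728152732-12923 \
      --memory=2200 --kubernetes-version=v1.16.0 --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd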
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220728152732-12923 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220728152732-12923
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220728152732-12923: (1.632148247s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220728152732-12923 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220728152732-12923 status --format={{.Host}}: exit status 7 (114.038491ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220728152732-12923 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220728152732-12923 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker : (35.574364135s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220728152732-12923 version --output=json
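
The upgrade to v1.24.3 completed in about 36 s, and the kubectl call above is how the test confirms the apiserver now reports the new version. Done by hand, the same check could be (context name from this run; jq assumed to be installed):

	kubectl --context kubernetes-upgrade-20220728152732-12923 version --output=json \
	  | jq -r '.serverVersion.gitVersion'   # expect "v1.24.3"
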
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220728152732-12923 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220728152732-12923 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (439.709212ms)

-- stdout --
	* [kubernetes-upgrade-20220728152732-12923] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.24.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220728152732-12923
	    minikube start -p kubernetes-upgrade-20220728152732-12923 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220728152732-129232 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.24.3, by running:
	    
	    minikube start -p kubernetes-upgrade-20220728152732-12923 --kubernetes-version=v1.24.3
	    

** /stderr **
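
This refusal is the behavior under test: an in-place downgrade from v1.24.3 to v1.16.0 must fail fast rather than corrupt the cluster, and the stderr pairs exit status 106 with the K8S_DOWNGRADE_UNSUPPORTED reason. Scripted, the same assertion reads (command line copied from the run above):

	if out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220728152732-12923 \
	  --memory=2200 --kubernetes-version=v1.16.0 --driver=docker; then
	  echo "BUG: in-place downgrade unexpectedly succeeded" >&2
	  exit 1
	fi
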
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220728152732-12923 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker 
E0728 15:32:37.937929   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220728152732-12923 --memory=2200 --kubernetes-version=v1.24.3 --alsologtostderr -v=1 --driver=docker : (46.156727084s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-07-28 15:33:09.908736 -0700 PDT m=+3304.125836616
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-20220728152732-12923
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-20220728152732-12923:

-- stdout --
	[
	    {
	        "Id": "43528b2d5c7f4a831ccca4e82c9b32b12612cc6093707aff3499fecaa9a73a72",
	        "Created": "2022-07-28T22:27:43.151586841Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 168804,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:31:48.953856168Z",
	            "FinishedAt": "2022-07-28T22:31:46.426452624Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/43528b2d5c7f4a831ccca4e82c9b32b12612cc6093707aff3499fecaa9a73a72/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/43528b2d5c7f4a831ccca4e82c9b32b12612cc6093707aff3499fecaa9a73a72/hostname",
	        "HostsPath": "/var/lib/docker/containers/43528b2d5c7f4a831ccca4e82c9b32b12612cc6093707aff3499fecaa9a73a72/hosts",
	        "LogPath": "/var/lib/docker/containers/43528b2d5c7f4a831ccca4e82c9b32b12612cc6093707aff3499fecaa9a73a72/43528b2d5c7f4a831ccca4e82c9b32b12612cc6093707aff3499fecaa9a73a72-json.log",
	        "Name": "/kubernetes-upgrade-20220728152732-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-20220728152732-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-20220728152732-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/54b2ab75d3cccf5736f6458068ea4b162efa04447c11c539eacac11cb8d832e4-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/54b2ab75d3cccf5736f6458068ea4b162efa04447c11c539eacac11cb8d832e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/54b2ab75d3cccf5736f6458068ea4b162efa04447c11c539eacac11cb8d832e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/54b2ab75d3cccf5736f6458068ea4b162efa04447c11c539eacac11cb8d832e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-20220728152732-12923",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-20220728152732-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-20220728152732-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-20220728152732-12923",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-20220728152732-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b543097fe50c156e416b2d0123a20dde4dce53f74db44e301b6a86ff547e9c0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57736"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57737"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57738"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57739"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57740"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7b543097fe50",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-20220728152732-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "43528b2d5c7f",
	                        "kubernetes-upgrade-20220728152732-12923"
	                    ],
	                    "NetworkID": "1feb5cd5dfa70d81b87c265071617e991c3c086e03fe58777e235f300d60e5a4",
	                    "EndpointID": "0a5a662cd8f43ff4601a5ced2522e6ef1fe6dfdb0e9043810f8ba389222eaf06",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
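
The post-mortem helper dumps the whole `docker inspect` document; when only a few fields matter, a Go template trims it down (field paths taken from the JSON above):

	docker inspect kubernetes-upgrade-20220728152732-12923 --format \
	  'state={{.State.Status}} restarts={{.RestartCount}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
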
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-20220728152732-12923 -n kubernetes-upgrade-20220728152732-12923
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220728152732-12923 logs -n 25

=== CONT  TestKubernetesUpgrade
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220728152732-12923 logs -n 25: (3.090863603s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p                                      | cert-options-20220728152503-12923       | jenkins | v1.26.0 | 28 Jul 22 15:25 PDT | 28 Jul 22 15:25 PDT |
	|         | cert-options-20220728152503-12923       |                                         |         |         |                     |                     |
	|         | -- sudo cat                             |                                         |         |         |                     |                     |
	|         | /etc/kubernetes/admin.conf              |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-options-20220728152503-12923       | jenkins | v1.26.0 | 28 Jul 22 15:25 PDT | 28 Jul 22 15:25 PDT |
	|         | cert-options-20220728152503-12923       |                                         |         |         |                     |                     |
	| delete  | -p                                      | running-upgrade-20220728152536-12923    | jenkins | v1.26.0 | 28 Jul 22 15:26 PDT | 28 Jul 22 15:26 PDT |
	|         | running-upgrade-20220728152536-12923    |                                         |         |         |                     |                     |
	| delete  | -p                                      | missing-upgrade-20220728152643-12923    | jenkins | v1.26.0 | 28 Jul 22 15:27 PDT | 28 Jul 22 15:27 PDT |
	|         | missing-upgrade-20220728152643-12923    |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220728152732-12923 | jenkins | v1.26.0 | 28 Jul 22 15:27 PDT |                     |
	|         | kubernetes-upgrade-20220728152732-12923 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-expiration-20220728152452-12923    | jenkins | v1.26.0 | 28 Jul 22 15:28 PDT | 28 Jul 22 15:28 PDT |
	|         | cert-expiration-20220728152452-12923    |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --cert-expiration=8760h                 |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-expiration-20220728152452-12923    | jenkins | v1.26.0 | 28 Jul 22 15:28 PDT | 28 Jul 22 15:28 PDT |
	|         | cert-expiration-20220728152452-12923    |                                         |         |         |                     |                     |
	| delete  | -p                                      | stopped-upgrade-20220728152857-12923    | jenkins | v1.26.0 | 28 Jul 22 15:29 PDT | 28 Jul 22 15:29 PDT |
	|         | stopped-upgrade-20220728152857-12923    |                                         |         |         |                     |                     |
	| start   | -p pause-20220728152948-12923           | pause-20220728152948-12923              | jenkins | v1.26.0 | 28 Jul 22 15:29 PDT | 28 Jul 22 15:30 PDT |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --install-addons=false                  |                                         |         |         |                     |                     |
	|         | --wait=all --driver=docker              |                                         |         |         |                     |                     |
	| start   | -p pause-20220728152948-12923           | pause-20220728152948-12923              | jenkins | v1.26.0 | 28 Jul 22 15:30 PDT | 28 Jul 22 15:31 PDT |
	|         | --alsologtostderr -v=1                  |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| pause   | -p pause-20220728152948-12923           | pause-20220728152948-12923              | jenkins | v1.26.0 | 28 Jul 22 15:31 PDT | 28 Jul 22 15:31 PDT |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	| stop    | -p                                      | kubernetes-upgrade-20220728152732-12923 | jenkins | v1.26.0 | 28 Jul 22 15:31 PDT | 28 Jul 22 15:31 PDT |
	|         | kubernetes-upgrade-20220728152732-12923 |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220728152732-12923 | jenkins | v1.26.0 | 28 Jul 22 15:31 PDT | 28 Jul 22 15:32 PDT |
	|         | kubernetes-upgrade-20220728152732-12923 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	| delete  | -p pause-20220728152948-12923           | pause-20220728152948-12923              | jenkins | v1.26.0 | 28 Jul 22 15:32 PDT | 28 Jul 22 15:32 PDT |
	| start   | -p                                      | NoKubernetes-20220728153211-12923       | jenkins | v1.26.0 | 28 Jul 22 15:32 PDT |                     |
	|         | NoKubernetes-20220728153211-12923       |                                         |         |         |                     |                     |
	|         | --no-kubernetes                         |                                         |         |         |                     |                     |
	|         | --kubernetes-version=1.20               |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| start   | -p                                      | NoKubernetes-20220728153211-12923       | jenkins | v1.26.0 | 28 Jul 22 15:32 PDT | 28 Jul 22 15:32 PDT |
	|         | NoKubernetes-20220728153211-12923       |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220728152732-12923 | jenkins | v1.26.0 | 28 Jul 22 15:32 PDT |                     |
	|         | kubernetes-upgrade-20220728152732-12923 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0            |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220728152732-12923 | jenkins | v1.26.0 | 28 Jul 22 15:32 PDT | 28 Jul 22 15:33 PDT |
	|         | kubernetes-upgrade-20220728152732-12923 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	| start   | -p                                      | NoKubernetes-20220728153211-12923       | jenkins | v1.26.0 | 28 Jul 22 15:32 PDT | 28 Jul 22 15:32 PDT |
	|         | NoKubernetes-20220728153211-12923       |                                         |         |         |                     |                     |
	|         | --no-kubernetes --driver=docker         |                                         |         |         |                     |                     |
	| delete  | -p                                      | NoKubernetes-20220728153211-12923       | jenkins | v1.26.0 | 28 Jul 22 15:32 PDT | 28 Jul 22 15:32 PDT |
	|         | NoKubernetes-20220728153211-12923       |                                         |         |         |                     |                     |
	| start   | -p                                      | NoKubernetes-20220728153211-12923       | jenkins | v1.26.0 | 28 Jul 22 15:32 PDT | 28 Jul 22 15:33 PDT |
	|         | NoKubernetes-20220728153211-12923       |                                         |         |         |                     |                     |
	|         | --no-kubernetes --driver=docker         |                                         |         |         |                     |                     |
	| ssh     | -p                                      | NoKubernetes-20220728153211-12923       | jenkins | v1.26.0 | 28 Jul 22 15:33 PDT |                     |
	|         | NoKubernetes-20220728153211-12923       |                                         |         |         |                     |                     |
	|         | sudo systemctl is-active --quiet        |                                         |         |         |                     |                     |
	|         | service kubelet                         |                                         |         |         |                     |                     |
	| profile | list                                    | minikube                                | jenkins | v1.26.0 | 28 Jul 22 15:33 PDT | 28 Jul 22 15:33 PDT |
	| profile | list --output=json                      | minikube                                | jenkins | v1.26.0 | 28 Jul 22 15:33 PDT | 28 Jul 22 15:33 PDT |
	| stop    | -p                                      | NoKubernetes-20220728153211-12923       | jenkins | v1.26.0 | 28 Jul 22 15:33 PDT |                     |
	|         | NoKubernetes-20220728153211-12923       |                                         |         |         |                     |                     |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 15:32:58
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 15:32:58.250701   26149 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:32:58.250926   26149 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:32:58.250928   26149 out.go:309] Setting ErrFile to fd 2...
	I0728 15:32:58.250931   26149 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:32:58.251071   26149 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:32:58.251548   26149 out.go:303] Setting JSON to false
	I0728 15:32:58.267035   26149 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8620,"bootTime":1659038958,"procs":352,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:32:58.267140   26149 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:32:58.288885   26149 out.go:177] * [NoKubernetes-20220728153211-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:32:58.310009   26149 notify.go:193] Checking for updates...
	I0728 15:32:58.331858   26149 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:32:58.354131   26149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:32:58.376168   26149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:32:58.397796   26149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:32:58.419143   26149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:32:58.441692   26149 config.go:178] Loaded profile config "kubernetes-upgrade-20220728152732-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:32:58.441742   26149 start.go:1658] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0728 15:32:58.441784   26149 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:32:58.510616   26149 docker.go:137] docker version: linux-20.10.17
	I0728 15:32:58.510758   26149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:32:58.640690   26149 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-28 22:32:58.575433392 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:32:58.662319   26149 out.go:177] * Using the docker driver based on user configuration
	I0728 15:32:54.072170   25949 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57740/healthz ...
	I0728 15:32:54.073561   25949 api_server.go:256] stopped: https://127.0.0.1:57740/healthz: Get "https://127.0.0.1:57740/healthz": EOF
	I0728 15:32:54.073589   25949 retry.go:31] will retry after 1.201160326s: state is "Stopped"
	I0728 15:32:55.274808   25949 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57740/healthz ...
	I0728 15:32:55.276184   25949 api_server.go:256] stopped: https://127.0.0.1:57740/healthz: Get "https://127.0.0.1:57740/healthz": EOF
	I0728 15:32:55.276204   25949 retry.go:31] will retry after 1.723796097s: state is "Stopped"
	I0728 15:32:57.002199   25949 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57740/healthz ...
	I0728 15:32:57.004515   25949 api_server.go:256] stopped: https://127.0.0.1:57740/healthz: Get "https://127.0.0.1:57740/healthz": EOF
	I0728 15:32:57.004541   25949 retry.go:31] will retry after 1.596532639s: state is "Stopped"
	I0728 15:32:58.602451   25949 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57740/healthz ...
	I0728 15:32:58.603856   25949 api_server.go:256] stopped: https://127.0.0.1:57740/healthz: Get "https://127.0.0.1:57740/healthz": EOF
	I0728 15:32:58.603875   25949 retry.go:31] will retry after 2.189373114s: state is "Stopped"
	I0728 15:32:58.683144   26149 start.go:284] selected driver: docker
	I0728 15:32:58.683182   26149 start.go:808] validating driver "docker" against <nil>
	I0728 15:32:58.683209   26149 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:32:58.683341   26149 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:32:58.830443   26149 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-28 22:32:58.748181338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:32:58.830569   26149 start.go:1658] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0728 15:32:58.830581   26149 start.go:1658] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0728 15:32:58.830598   26149 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0728 15:32:58.832657   26149 start_flags.go:377] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0728 15:32:58.832771   26149 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 15:32:58.854270   26149 out.go:177] * Using Docker Desktop driver with root privileges
	I0728 15:32:58.875583   26149 cni.go:95] Creating CNI manager for ""
	I0728 15:32:58.875605   26149 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:32:58.875649   26149 start_flags.go:310] config:
	{Name:NoKubernetes-20220728153211-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:NoKubernetes-20220728153211-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:32:58.875756   26149 start.go:1658] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0728 15:32:58.918348   26149 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-20220728153211-12923
	I0728 15:32:58.939696   26149 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:32:58.961433   26149 out.go:177] * Pulling base image ...
	I0728 15:32:59.003678   26149 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	I0728 15:32:59.003759   26149 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:32:59.067201   26149 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:32:59.067218   26149 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	W0728 15:32:59.073647   26149 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0728 15:32:59.073752   26149 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/NoKubernetes-20220728153211-12923/config.json ...
	I0728 15:32:59.073779   26149 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/NoKubernetes-20220728153211-12923/config.json: {Name:mk32550197e60ad80f1c981c958d03dad5dcb851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:32:59.074025   26149 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:32:59.074051   26149 start.go:370] acquiring machines lock for NoKubernetes-20220728153211-12923: {Name:mkee877efffb0675bcc3db81adc3d45cf4b0df44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:32:59.074085   26149 start.go:374] acquired machines lock for "NoKubernetes-20220728153211-12923" in 28.97µs
	I0728 15:32:59.074105   26149 start.go:92] Provisioning new machine with config: &{Name:NoKubernetes-20220728153211-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-20220728153211-12923 Names
pace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{
Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 15:32:59.074152   26149 start.go:132] createHost starting for "" (driver="docker")
	I0728 15:32:59.095887   26149 out.go:204] * Creating docker container (CPUs=2, Memory=5895MB) ...
	I0728 15:32:59.096275   26149 start.go:166] libmachine.API.Create for "NoKubernetes-20220728153211-12923" (driver="docker")
	I0728 15:32:59.096322   26149 client.go:168] LocalClient.Create starting
	I0728 15:32:59.096456   26149 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem
	I0728 15:32:59.096520   26149 main.go:134] libmachine: Decoding PEM data...
	I0728 15:32:59.096546   26149 main.go:134] libmachine: Parsing certificate...
	I0728 15:32:59.096652   26149 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem
	I0728 15:32:59.096695   26149 main.go:134] libmachine: Decoding PEM data...
	I0728 15:32:59.096709   26149 main.go:134] libmachine: Parsing certificate...
	I0728 15:32:59.118043   26149 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220728153211-12923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0728 15:32:59.181448   26149 cli_runner.go:211] docker network inspect NoKubernetes-20220728153211-12923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0728 15:32:59.181526   26149 network_create.go:272] running [docker network inspect NoKubernetes-20220728153211-12923] to gather additional debugging logs...
	I0728 15:32:59.181553   26149 cli_runner.go:164] Run: docker network inspect NoKubernetes-20220728153211-12923
	W0728 15:32:59.243797   26149 cli_runner.go:211] docker network inspect NoKubernetes-20220728153211-12923 returned with exit code 1
	I0728 15:32:59.243816   26149 network_create.go:275] error running [docker network inspect NoKubernetes-20220728153211-12923]: docker network inspect NoKubernetes-20220728153211-12923: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: NoKubernetes-20220728153211-12923
	I0728 15:32:59.243831   26149 network_create.go:277] output of [docker network inspect NoKubernetes-20220728153211-12923]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: NoKubernetes-20220728153211-12923
	
	** /stderr **
	I0728 15:32:59.243946   26149 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0728 15:32:59.304920   26149 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b87910] misses:0}
	I0728 15:32:59.304950   26149 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:32:59.304966   26149 network_create.go:115] attempt to create docker network NoKubernetes-20220728153211-12923 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0728 15:32:59.305035   26149 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220728153211-12923 NoKubernetes-20220728153211-12923
	W0728 15:32:59.366121   26149 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220728153211-12923 NoKubernetes-20220728153211-12923 returned with exit code 1
	W0728 15:32:59.366158   26149 network_create.go:107] failed to create docker network NoKubernetes-20220728153211-12923 192.168.49.0/24, will retry: subnet is taken
	I0728 15:32:59.366428   26149 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b87910] amended:false}} dirty:map[] misses:0}
	I0728 15:32:59.366441   26149 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:32:59.366683   26149 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b87910] amended:true}} dirty:map[192.168.49.0:0xc000b87910 192.168.58.0:0xc000b87948] misses:0}
	I0728 15:32:59.366694   26149 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:32:59.366699   26149 network_create.go:115] attempt to create docker network NoKubernetes-20220728153211-12923 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0728 15:32:59.366756   26149 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220728153211-12923 NoKubernetes-20220728153211-12923
	W0728 15:32:59.480978   26149 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220728153211-12923 NoKubernetes-20220728153211-12923 returned with exit code 1
	W0728 15:32:59.481020   26149 network_create.go:107] failed to create docker network NoKubernetes-20220728153211-12923 192.168.58.0/24, will retry: subnet is taken
	I0728 15:32:59.481282   26149 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b87910] amended:true}} dirty:map[192.168.49.0:0xc000b87910 192.168.58.0:0xc000b87948] misses:1}
	I0728 15:32:59.481297   26149 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:32:59.481488   26149 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b87910] amended:true}} dirty:map[192.168.49.0:0xc000b87910 192.168.58.0:0xc000b87948 192.168.67.0:0xc000b87980] misses:1}
	I0728 15:32:59.481530   26149 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:32:59.481538   26149 network_create.go:115] attempt to create docker network NoKubernetes-20220728153211-12923 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0728 15:32:59.481593   26149 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-20220728153211-12923 NoKubernetes-20220728153211-12923
	I0728 15:32:59.576937   26149 network_create.go:99] docker network NoKubernetes-20220728153211-12923 192.168.67.0/24 created
	I0728 15:32:59.576967   26149 kic.go:106] calculated static IP "192.168.67.2" for the "NoKubernetes-20220728153211-12923" container
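	[Editor's note] The network.go / network_create.go lines above show minikube's free-subnet search: it reserves a candidate /24, attempts "docker network create", and on a "subnet is taken" failure skips the reserved range and advances to the next candidate (192.168.49.0 -> 192.168.58.0 -> 192.168.67.0). A minimal Go sketch of that retry loop follows; tryCreateNetwork is a hypothetical helper that shells out to docker, not minikube's actual implementation, and the "overlap" substring match is an assumption about docker's error text.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// tryCreateNetwork attempts `docker network create` for one candidate
	// subnet and reports whether the subnet was already in use.
	// Hypothetical sketch of the retry seen in the log above.
	func tryCreateNetwork(name, subnet, gateway string) (taken bool, err error) {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			name).CombinedOutput()
		if err == nil {
			return false, nil
		}
		// Assumption: docker reports an address-space collision with "overlap".
		if strings.Contains(strings.ToLower(string(out)), "overlap") {
			return true, nil // subnet is taken; caller retries the next range
		}
		return false, fmt.Errorf("network create failed: %v: %s", err, out)
	}

	func main() {
		// Walk candidate /24s with the same stride seen in the log: 49, 58, 67, ...
		for octet := 49; octet <= 255; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			gateway := fmt.Sprintf("192.168.%d.1", octet)
			taken, err := tryCreateNetwork("example-net", subnet, gateway)
			if err != nil {
				fmt.Println(err)
				return
			}
			if !taken {
				fmt.Println("created network on", subnet)
				return
			}
			fmt.Println("skipping taken subnet", subnet)
		}
	}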
	I0728 15:32:59.577066   26149 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0728 15:32:59.639929   26149 cli_runner.go:164] Run: docker volume create NoKubernetes-20220728153211-12923 --label name.minikube.sigs.k8s.io=NoKubernetes-20220728153211-12923 --label created_by.minikube.sigs.k8s.io=true
	I0728 15:32:59.702038   26149 oci.go:103] Successfully created a docker volume NoKubernetes-20220728153211-12923
	I0728 15:32:59.702223   26149 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-20220728153211-12923-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-20220728153211-12923 --entrypoint /usr/bin/test -v NoKubernetes-20220728153211-12923:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0728 15:33:00.143344   26149 oci.go:107] Successfully prepared a docker volume NoKubernetes-20220728153211-12923
	I0728 15:33:00.143441   26149 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	I0728 15:33:00.143557   26149 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0728 15:33:00.275383   26149 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-20220728153211-12923 --name NoKubernetes-20220728153211-12923 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-20220728153211-12923 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-20220728153211-12923 --network NoKubernetes-20220728153211-12923 --ip 192.168.67.2 --volume NoKubernetes-20220728153211-12923:/var --security-opt apparmor=unconfined --memory=5895mb --memory-swap=5895mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0728 15:33:00.649158   26149 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220728153211-12923 --format={{.State.Running}}
	I0728 15:33:00.713964   26149 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220728153211-12923 --format={{.State.Status}}
	I0728 15:33:00.781099   26149 cli_runner.go:164] Run: docker exec NoKubernetes-20220728153211-12923 stat /var/lib/dpkg/alternatives/iptables
	I0728 15:33:00.899518   26149 oci.go:144] the created container "NoKubernetes-20220728153211-12923" has a running status.
	I0728 15:33:00.899559   26149 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/NoKubernetes-20220728153211-12923/id_rsa...
	I0728 15:33:00.990587   26149 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/NoKubernetes-20220728153211-12923/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0728 15:33:01.107860   26149 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220728153211-12923 --format={{.State.Status}}
	I0728 15:33:01.174402   26149 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0728 15:33:01.174415   26149 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-20220728153211-12923 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0728 15:33:01.287763   26149 cli_runner.go:164] Run: docker container inspect NoKubernetes-20220728153211-12923 --format={{.State.Status}}
	I0728 15:33:01.351749   26149 machine.go:88] provisioning docker machine ...
	I0728 15:33:01.351782   26149 ubuntu.go:169] provisioning hostname "NoKubernetes-20220728153211-12923"
	I0728 15:33:01.351856   26149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220728153211-12923
	I0728 15:33:01.415607   26149 main.go:134] libmachine: Using SSH client type: native
	I0728 15:33:01.415801   26149 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57921 <nil> <nil>}
	I0728 15:33:01.415821   26149 main.go:134] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-20220728153211-12923 && echo "NoKubernetes-20220728153211-12923" | sudo tee /etc/hostname
	I0728 15:33:01.540837   26149 main.go:134] libmachine: SSH cmd err, output: <nil>: NoKubernetes-20220728153211-12923
	
	I0728 15:33:01.540908   26149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220728153211-12923
	I0728 15:33:01.604084   26149 main.go:134] libmachine: Using SSH client type: native
	I0728 15:33:01.604276   26149 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57921 <nil> <nil>}
	I0728 15:33:01.604288   26149 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-20220728153211-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-20220728153211-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-20220728153211-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:33:01.720825   26149 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:33:01.720840   26149 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:33:01.720864   26149 ubuntu.go:177] setting up certificates
	I0728 15:33:01.720871   26149 provision.go:83] configureAuth start
	I0728 15:33:01.720926   26149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220728153211-12923
	I0728 15:33:01.787801   26149 provision.go:138] copyHostCerts
	I0728 15:33:01.787884   26149 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:33:01.787890   26149 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:33:01.787993   26149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:33:01.788238   26149 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:33:01.788248   26149 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:33:01.788321   26149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:33:01.788509   26149 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:33:01.788513   26149 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:33:01.788583   26149 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:33:01.788737   26149 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-20220728153211-12923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube NoKubernetes-20220728153211-12923]
	I0728 15:33:01.947971   26149 provision.go:172] copyRemoteCerts
	I0728 15:33:01.948027   26149 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:33:01.948068   26149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220728153211-12923
	I0728 15:33:02.012441   26149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57921 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/NoKubernetes-20220728153211-12923/id_rsa Username:docker}
	I0728 15:33:02.097518   26149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:33:02.116120   26149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1273 bytes)
	I0728 15:33:02.134817   26149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 15:33:02.152376   26149 provision.go:86] duration metric: configureAuth took 431.483622ms
	I0728 15:33:02.152385   26149 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:33:02.152519   26149 config.go:178] Loaded profile config "NoKubernetes-20220728153211-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0728 15:33:02.152584   26149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220728153211-12923
	I0728 15:33:02.217298   26149 main.go:134] libmachine: Using SSH client type: native
	I0728 15:33:02.217443   26149 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57921 <nil> <nil>}
	I0728 15:33:02.217456   26149 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:33:02.338038   26149 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:33:02.338045   26149 ubuntu.go:71] root file system type: overlay
	I0728 15:33:02.338197   26149 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:33:02.338272   26149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220728153211-12923
	I0728 15:33:02.405975   26149 main.go:134] libmachine: Using SSH client type: native
	I0728 15:33:02.406146   26149 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57921 <nil> <nil>}
	I0728 15:33:02.406191   26149 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:33:02.538071   26149 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:33:02.538155   26149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220728153211-12923
	I0728 15:33:02.602755   26149 main.go:134] libmachine: Using SSH client type: native
	I0728 15:33:02.602893   26149 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57921 <nil> <nil>}
	I0728 15:33:02.602904   26149 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:33:03.244711   26149 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-28 22:33:02.547298032 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
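	[Editor's note] The SSH command and diff above implement an idempotent unit-file update: the desired unit is written to docker.service.new, diffed against the live unit, and only swapped into place (followed by daemon-reload, enable, and restart) when the two differ, so repeated provisioning runs do not restart docker needlessly. A sketch of composing that same shell pipeline in Go; buildUnitSwapCmd is a hypothetical helper, and in minikube the resulting string is executed over the SSH session shown in the log.

	package main

	import "fmt"

	// buildUnitSwapCmd returns the "compare, then swap and restart" pipeline
	// seen in the log, parameterized on the unit path. Hypothetical sketch.
	func buildUnitSwapCmd(unit string) string {
		return fmt.Sprintf(
			"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
				"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
				"sudo systemctl -f restart docker; }",
			unit)
	}

	func main() {
		// minikube sends this string to the node over SSH; here we just print it.
		fmt.Println(buildUnitSwapCmd("/lib/systemd/system/docker.service"))
	}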
	I0728 15:33:03.244725   26149 machine.go:91] provisioned docker machine in 1.892982421s
	I0728 15:33:03.244729   26149 client.go:171] LocalClient.Create took 4.148458827s
	I0728 15:33:03.244748   26149 start.go:174] duration metric: libmachine.API.Create for "NoKubernetes-20220728153211-12923" took 4.148529591s
	I0728 15:33:03.244756   26149 start.go:307] post-start starting for "NoKubernetes-20220728153211-12923" (driver="docker")
	I0728 15:33:03.244761   26149 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:33:03.244829   26149 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:33:03.244874   26149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220728153211-12923
	I0728 15:33:00.793434   25949 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57740/healthz ...
	I0728 15:33:00.794567   25949 api_server.go:256] stopped: https://127.0.0.1:57740/healthz: Get "https://127.0.0.1:57740/healthz": EOF
	I0728 15:33:00.794591   25949 api_server.go:165] Checking apiserver status ...
	I0728 15:33:00.794660   25949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:33:00.805429   25949 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:33:00.805446   25949 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0728 15:33:00.805453   25949 kubeadm.go:1092] stopping kube-system containers ...
	I0728 15:33:00.805524   25949 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:33:00.840445   25949 docker.go:443] Stopping containers: [4083a2a63cb1 d07c2b05dce5 79d0d8be9c2e 6b35f226e6d9 13999e8dfd24 00415f6a9b4e dfef98ff98f5 6b538081a645 5370d5d11120 ab5926e499a8 e9a3cdd19608 5a0b64b81b2a 6c1db748e9e0 9a48f541b2ff d7af9c81082a]
	I0728 15:33:00.840519   25949 ssh_runner.go:195] Run: docker stop 4083a2a63cb1 d07c2b05dce5 79d0d8be9c2e 6b35f226e6d9 13999e8dfd24 00415f6a9b4e dfef98ff98f5 6b538081a645 5370d5d11120 ab5926e499a8 e9a3cdd19608 5a0b64b81b2a 6c1db748e9e0 9a48f541b2ff d7af9c81082a
	I0728 15:33:01.020749   25949 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 15:33:01.055567   25949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:33:01.065627   25949 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5759 Jul 28 22:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5799 Jul 28 22:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5963 Jul 28 22:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5743 Jul 28 22:29 /etc/kubernetes/scheduler.conf
	
	I0728 15:33:01.065694   25949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 15:33:01.075487   25949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 15:33:01.084266   25949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 15:33:01.093503   25949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 15:33:01.103426   25949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:33:01.112256   25949 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 15:33:01.112267   25949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:33:01.160044   25949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:33:02.150195   25949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:33:02.349123   25949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:33:02.400776   25949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
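	[Editor's note] The reconfigure path above re-runs individual "kubeadm init" phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml rather than performing a full init. A sketch of driving those phases in order with the same /bin/bash -c plus sudo env PATH invocation seen in the log; paths and version are taken from the log lines, and running this standalone assumes it executes on the node itself.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		binDir := "/var/lib/minikube/binaries/v1.24.3"
		cfg := "/var/tmp/minikube/kubeadm.yaml"
		// Phase order matches the reconfigure sequence in the log above.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			cmd := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, cfg)
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
				return
			}
		}
		fmt.Println("all phases completed")
	}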
	I0728 15:33:02.459735   25949 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:33:02.459827   25949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:33:02.988309   25949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:33:03.488373   25949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:33:03.501100   25949 api_server.go:71] duration metric: took 1.041376956s to wait for apiserver process to appear ...
	I0728 15:33:03.501124   25949 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:33:03.501135   25949 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57740/healthz ...
	I0728 15:33:03.313121   26149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57921 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/NoKubernetes-20220728153211-12923/id_rsa Username:docker}
	I0728 15:33:03.402464   26149 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:33:03.407401   26149 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:33:03.407420   26149 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:33:03.407435   26149 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:33:03.407443   26149 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:33:03.407458   26149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:33:03.407638   26149 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:33:03.407860   26149 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:33:03.408509   26149 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:33:03.417487   26149 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:33:03.439946   26149 start.go:310] post-start completed in 195.184369ms
	I0728 15:33:03.440436   26149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220728153211-12923
	I0728 15:33:03.506343   26149 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/NoKubernetes-20220728153211-12923/config.json ...
	I0728 15:33:03.506804   26149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:33:03.506856   26149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220728153211-12923
	I0728 15:33:03.576122   26149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57921 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/NoKubernetes-20220728153211-12923/id_rsa Username:docker}
	I0728 15:33:03.662365   26149 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:33:03.666845   26149 start.go:135] duration metric: createHost completed in 4.592746933s
	I0728 15:33:03.666857   26149 start.go:82] releasing machines lock for "NoKubernetes-20220728153211-12923", held for 4.592822287s
	I0728 15:33:03.666928   26149 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-20220728153211-12923
	I0728 15:33:03.737564   26149 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:33:03.737649   26149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220728153211-12923
	I0728 15:33:03.738200   26149 ssh_runner.go:195] Run: systemctl --version
	I0728 15:33:03.738443   26149 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-20220728153211-12923
	I0728 15:33:03.827899   26149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57921 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/NoKubernetes-20220728153211-12923/id_rsa Username:docker}
	I0728 15:33:03.827913   26149 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57921 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/NoKubernetes-20220728153211-12923/id_rsa Username:docker}
	I0728 15:33:04.098814   26149 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:33:04.108497   26149 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:33:04.108554   26149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:33:04.117891   26149 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:33:04.131167   26149 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:33:04.197703   26149 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:33:04.266357   26149 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:33:04.332895   26149 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:33:04.653175   26149 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:33:04.691216   26149 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:33:04.772106   26149 out.go:204] * Preparing Docker 20.10.17 ...
	I0728 15:33:04.793033   26149 out.go:177] * Done! minikube is ready without Kubernetes!
	I0728 15:33:04.836438   26149 out.go:177] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube docker-env" to point your docker-cli to the docker inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	I0728 15:33:06.960760   25949 api_server.go:266] https://127.0.0.1:57740/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 15:33:06.960778   25949 api_server.go:102] status: https://127.0.0.1:57740/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 15:33:07.461897   25949 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57740/healthz ...
	I0728 15:33:07.468160   25949 api_server.go:266] https://127.0.0.1:57740/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:33:07.468182   25949 api_server.go:102] status: https://127.0.0.1:57740/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:33:07.960936   25949 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57740/healthz ...
	I0728 15:33:07.966516   25949 api_server.go:266] https://127.0.0.1:57740/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:33:07.966535   25949 api_server.go:102] status: https://127.0.0.1:57740/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:33:08.463019   25949 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57740/healthz ...
	I0728 15:33:08.470355   25949 api_server.go:266] https://127.0.0.1:57740/healthz returned 200:
	ok
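	[Editor's note] The healthz probes above trace the apiserver's startup: 403 while anonymous access is still denied, 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are pending, then 200 "ok". A minimal Go sketch of that roughly 500ms polling loop; the InsecureSkipVerify client is an assumption made only so the sketch runs against a self-signed endpoint, whereas minikube configures proper certificate trust.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls /healthz until it returns 200 or the deadline passes.
	// Non-200 responses (403 before RBAC bootstrap, 500 while post-start
	// hooks run) are treated as "not ready yet".
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Assumption for this sketch only: skip cert verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://127.0.0.1:57740/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}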
	I0728 15:33:08.476461   25949 api_server.go:140] control plane version: v1.24.3
	I0728 15:33:08.476473   25949 api_server.go:130] duration metric: took 4.975409369s to wait for apiserver health ...
	I0728 15:33:08.476478   25949 cni.go:95] Creating CNI manager for ""
	I0728 15:33:08.476483   25949 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:33:08.476494   25949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:33:08.481696   25949 system_pods.go:59] 4 kube-system pods found
	I0728 15:33:08.481714   25949 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220728152732-12923" [128b54e2-ba9e-4cd0-8c06-d8e374e72fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0728 15:33:08.481721   25949 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220728152732-12923" [15a46252-e453-4837-b407-0342fb5331cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0728 15:33:08.481730   25949 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220728152732-12923" [28848afd-31be-4345-b98d-37dbdd769309] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 15:33:08.481738   25949 system_pods.go:61] "storage-provisioner" [68ed419a-e45d-4d15-9a4c-7cc4febbfebe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 15:33:08.481742   25949 system_pods.go:74] duration metric: took 5.244346ms to wait for pod list to return data ...
	I0728 15:33:08.481748   25949 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:33:08.484144   25949 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:33:08.484156   25949 node_conditions.go:123] node cpu capacity is 6
	I0728 15:33:08.484165   25949 node_conditions.go:105] duration metric: took 2.41358ms to run NodePressure ...
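	[Editor's note] Once healthz succeeds, minikube lists kube-system pods and checks node conditions, as the system_pods.go and node_conditions.go lines above show. Listing those pods looks roughly like the client-go sketch below; using a kubeconfig at the conventional home-directory path is an assumption here, since the log's kapi client builds its rest.Config directly from profile files.

	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		// Assumption: kubeconfig at the conventional location.
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		// Print phase per pod, mirroring the "waiting for kube-system pods" log lines.
		for _, p := range pods.Items {
			fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
		}
	}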
	I0728 15:33:08.484176   25949 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:33:08.651341   25949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 15:33:08.658666   25949 ops.go:34] apiserver oom_adj: -16
	I0728 15:33:08.658679   25949 kubeadm.go:630] restartCluster took 37.927756834s
	I0728 15:33:08.658688   25949 kubeadm.go:397] StartCluster complete in 38.000135924s
	I0728 15:33:08.658704   25949 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:33:08.658779   25949 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:33:08.659233   25949 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:33:08.659856   25949 kapi.go:59] client config for kubernetes-upgrade-20220728152732-12923: &rest.Config{Host:"https://127.0.0.1:57740", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 15:33:08.662421   25949 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-20220728152732-12923" rescaled to 1
	I0728 15:33:08.662453   25949 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 15:33:08.703499   25949 out.go:177] * Verifying Kubernetes components...
	I0728 15:33:08.662470   25949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 15:33:08.662493   25949 addons.go:412] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0728 15:33:08.662597   25949 config.go:178] Loaded profile config "kubernetes-upgrade-20220728152732-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:33:08.731700   25949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:33:08.731700   25949 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-20220728152732-12923"
	I0728 15:33:08.731717   25949 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-20220728152732-12923"
	I0728 15:33:08.731722   25949 addons.go:153] Setting addon storage-provisioner=true in "kubernetes-upgrade-20220728152732-12923"
	W0728 15:33:08.731730   25949 addons.go:162] addon storage-provisioner should already be in state true
	I0728 15:33:08.731736   25949 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-20220728152732-12923"
	I0728 15:33:08.731771   25949 host.go:66] Checking if "kubernetes-upgrade-20220728152732-12923" exists ...
	I0728 15:33:08.731996   25949 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220728152732-12923 --format={{.State.Status}}
	I0728 15:33:08.732432   25949 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220728152732-12923 --format={{.State.Status}}
	I0728 15:33:08.766456   25949 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0728 15:33:08.766461   25949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:33:08.831293   25949 kapi.go:59] client config for kubernetes-upgrade-20220728152732-12923: &rest.Config{Host:"https://127.0.0.1:57740", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubernetes-upgrade-20220728152732-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 15:33:08.852074   25949 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:33:08.859789   25949 addons.go:153] Setting addon default-storageclass=true in "kubernetes-upgrade-20220728152732-12923"
	W0728 15:33:08.874086   25949 addons.go:162] addon default-storageclass should already be in state true
	I0728 15:33:08.874115   25949 host.go:66] Checking if "kubernetes-upgrade-20220728152732-12923" exists ...
	I0728 15:33:08.874140   25949 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:33:08.874149   25949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 15:33:08.874204   25949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:33:08.875219   25949 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-20220728152732-12923 --format={{.State.Status}}
	I0728 15:33:08.881208   25949 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:33:08.881412   25949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:33:08.892724   25949 api_server.go:71] duration metric: took 230.25558ms to wait for apiserver process to appear ...
	I0728 15:33:08.892757   25949 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:33:08.892772   25949 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57740/healthz ...
	I0728 15:33:08.899948   25949 api_server.go:266] https://127.0.0.1:57740/healthz returned 200:
	ok
	I0728 15:33:08.901351   25949 api_server.go:140] control plane version: v1.24.3
	I0728 15:33:08.901367   25949 api_server.go:130] duration metric: took 8.603296ms to wait for apiserver health ...
	I0728 15:33:08.901378   25949 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:33:08.906164   25949 system_pods.go:59] 4 kube-system pods found
	I0728 15:33:08.906188   25949 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-20220728152732-12923" [128b54e2-ba9e-4cd0-8c06-d8e374e72fd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0728 15:33:08.906196   25949 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-20220728152732-12923" [15a46252-e453-4837-b407-0342fb5331cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0728 15:33:08.906205   25949 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-20220728152732-12923" [28848afd-31be-4345-b98d-37dbdd769309] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 15:33:08.906225   25949 system_pods.go:61] "storage-provisioner" [68ed419a-e45d-4d15-9a4c-7cc4febbfebe] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 15:33:08.906232   25949 system_pods.go:74] duration metric: took 4.847438ms to wait for pod list to return data ...
	I0728 15:33:08.906241   25949 kubeadm.go:572] duration metric: took 243.77676ms to wait for : map[apiserver:true system_pods:true] ...
	I0728 15:33:08.906251   25949 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:33:08.909206   25949 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:33:08.909228   25949 node_conditions.go:123] node cpu capacity is 6
	I0728 15:33:08.909240   25949 node_conditions.go:105] duration metric: took 2.983922ms to run NodePressure ...
	I0728 15:33:08.909253   25949 start.go:216] waiting for startup goroutines ...
	I0728 15:33:08.948189   25949 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 15:33:08.948203   25949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 15:33:08.948279   25949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-20220728152732-12923
	I0728 15:33:08.948889   25949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57736 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728152732-12923/id_rsa Username:docker}
	I0728 15:33:09.017560   25949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57736 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/kubernetes-upgrade-20220728152732-12923/id_rsa Username:docker}
	I0728 15:33:09.042979   25949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:33:09.116400   25949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 15:33:09.699865   25949 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0728 15:33:09.720678   25949 addons.go:414] enableAddons completed in 1.058208167s
	I0728 15:33:09.751386   25949 start.go:506] kubectl: 1.24.2, cluster: 1.24.3 (minor skew: 0)
	I0728 15:33:09.850727   25949 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-20220728152732-12923" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 22:31:49 UTC, end at Thu 2022-07-28 22:33:11 UTC. --
	Jul 28 22:32:28 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:28.908353597Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 28 22:32:28 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:28.908398692Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 28 22:32:28 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:28.911876253Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 28 22:32:28 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:28.918926977Z" level=info msg="Loading containers: start."
	Jul 28 22:32:28 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:28.971714672Z" level=info msg="ignoring event" container=5370d5d111209015fc2bf948a99d063cf455cddb719536a3b736b9c5013c3c6c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:32:28 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:28.972574956Z" level=info msg="ignoring event" container=ab5926e499a8a6cffc09f1fd35df09fe75eb743c701d21231e476985f6689002 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:32:29 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:29.016486874Z" level=info msg="ignoring event" container=6b538081a64525d086f36b7f15bd6f54759b1150b0d5b0ec38c1241b50bd337a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:32:29 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:29.016543288Z" level=info msg="ignoring event" container=dfef98ff98f5a134bfd8e07d25b6703e2b452a80ec9cdd0483d495ca0ced4683 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:32:29 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:29.180842354Z" level=info msg="Removing stale sandbox 6426341485d1f9539c1f9431bbdeaeffa24c432f37d1a6f4c03a2d4e8f4b97a6 (5370d5d111209015fc2bf948a99d063cf455cddb719536a3b736b9c5013c3c6c)"
	Jul 28 22:32:29 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:29.182380151Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 003ec206dfd68879f9fd1ca0baec82bbd475088fce6a26de83c02da11274d4d5 17072c82aa7d6cac92f0999dc308d5fae11b4986990bc6b636ee74c1292d8966], retrying...."
	Jul 28 22:32:29 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:29.269085088Z" level=info msg="Removing stale sandbox 9c5896a6eec3dad4b18f0832fdfc5be031e9cbb5bb86a2a05a30e9ae1c7121ed (ab5926e499a8a6cffc09f1fd35df09fe75eb743c701d21231e476985f6689002)"
	Jul 28 22:32:29 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:29.270419144Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 003ec206dfd68879f9fd1ca0baec82bbd475088fce6a26de83c02da11274d4d5 a6e060aadb7a8a1ba23b7e34d276986fb704e055a06264cb47d523151e3bbda6], retrying...."
	Jul 28 22:32:29 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:29.293503604Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 28 22:32:29 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:29.326433464Z" level=info msg="Loading containers: done."
	Jul 28 22:32:29 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:29.336121510Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 28 22:32:29 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:29.336240902Z" level=info msg="Daemon has completed initialization"
	Jul 28 22:32:29 kubernetes-upgrade-20220728152732-12923 systemd[1]: Started Docker Application Container Engine.
	Jul 28 22:32:29 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:29.358851101Z" level=info msg="API listen on [::]:2376"
	Jul 28 22:32:29 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:29.361886799Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 28 22:32:50 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:32:50.745845630Z" level=info msg="ignoring event" container=79d0d8be9c2e1248d17bc599e4eb7998b0392749a8cd77a59a91dc71c82165e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:33:00 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:33:00.948992902Z" level=info msg="ignoring event" container=00415f6a9b4e338827a1f213491283443db1061820087973c7e08f8acad8a4c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:33:00 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:33:00.951184154Z" level=info msg="ignoring event" container=6b35f226e6d9e6a4859d1100830e8a9c6c1ad30b6882f09888f063f1443afa84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:33:00 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:33:00.951808014Z" level=info msg="ignoring event" container=d07c2b05dce58d76b1ea16d1ad5493ec508f1622cb65b31740dda3305240ec4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:33:00 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:33:00.952004460Z" level=info msg="ignoring event" container=13999e8dfd24d5524000b614d3421e36839e3c368e8e62998c713280f5be07c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:33:00 kubernetes-upgrade-20220728152732-12923 dockerd[2616]: time="2022-07-28T22:33:00.954479852Z" level=info msg="ignoring event" container=4083a2a63cb15c39fb8076223645bf552a2a9c38f1bb7504b90eba1db655e82c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                     CREATED              STATE               NAME                      ATTEMPT             POD ID
	377d543ae9827       586c112956dfc                                                                             8 seconds ago        Running             kube-controller-manager   3                   68176c9747b18
	d34426f4338fc       3a5aa3a515f5d                                                                             8 seconds ago        Running             kube-scheduler            3                   db012279c5569
	46027ea721c8b       d521dd763e2e3                                                                             8 seconds ago        Running             kube-apiserver            2                   c0c70a4c64795
	5336dfc97c944       aebe758cef4cd                                                                             8 seconds ago        Running             etcd                      0                   5f98b719aceed
	4083a2a63cb15       586c112956dfc                                                                             24 seconds ago       Exited              kube-controller-manager   2                   6b35f226e6d9e
	d07c2b05dce58       3a5aa3a515f5d                                                                             41 seconds ago       Exited              kube-scheduler            2                   13999e8dfd24d
	79d0d8be9c2e1       d521dd763e2e3                                                                             41 seconds ago       Exited              kube-apiserver            1                   00415f6a9b4e3
	e9a3cdd19608e       k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa   About a minute ago   Exited              etcd                      0                   5a0b64b81b2ad
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-20220728152732-12923
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-20220728152732-12923
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Jul 2022 22:32:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-20220728152732-12923
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Jul 2022 22:33:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Jul 2022 22:33:07 +0000   Thu, 28 Jul 2022 22:32:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Jul 2022 22:33:07 +0000   Thu, 28 Jul 2022 22:32:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Jul 2022 22:33:07 +0000   Thu, 28 Jul 2022 22:32:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Jul 2022 22:33:07 +0000   Thu, 28 Jul 2022 22:32:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-20220728152732-12923
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                5f3f41ed-8fba-4ad2-9a50-c0283ec6c371
	  Boot ID:                    defbf4cf-33ea-4fb3-a1e4-ed0c170fdaf9
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                               ------------  ----------  ---------------  -------------  ---
	  kube-system                 kube-apiserver-kubernetes-upgrade-20220728152732-12923             250m (4%)     0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-20220728152732-12923    200m (3%)     0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-scheduler-kubernetes-upgrade-20220728152732-12923             100m (1%)     0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 storage-provisioner                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                550m (9%)  0 (0%)
	  memory             0 (0%)     0 (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x8 over 68s)  kubelet          Node kubernetes-upgrade-20220728152732-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 68s)  kubelet          Node kubernetes-upgrade-20220728152732-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x7 over 68s)  kubelet          Node kubernetes-upgrade-20220728152732-12923 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           44s                node-controller  Node kubernetes-upgrade-20220728152732-12923 event: Registered Node kubernetes-upgrade-20220728152732-12923 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001439] FS-Cache: O-key=[8] '377f2e0300000000'
	[  +0.001052] FS-Cache: N-cookie c=000000003d1587d4 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001741] FS-Cache: N-cookie d=00000000b3020e27 n=00000000e968409d
	[  +0.001451] FS-Cache: N-key=[8] '377f2e0300000000'
	[  +0.001921] FS-Cache: Duplicate cookie detected
	[  +0.001022] FS-Cache: O-cookie c=00000000fc272a13 [p=00000000a81fea9c fl=226 nc=0 na=1]
	[  +0.001775] FS-Cache: O-cookie d=00000000b3020e27 n=0000000005e7aa76
	[  +0.001457] FS-Cache: O-key=[8] '377f2e0300000000'
	[  +0.001097] FS-Cache: N-cookie c=000000003d1587d4 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001747] FS-Cache: N-cookie d=00000000b3020e27 n=00000000992429b3
	[  +0.001462] FS-Cache: N-key=[8] '377f2e0300000000'
	[  +3.054735] FS-Cache: Duplicate cookie detected
	[  +0.001042] FS-Cache: O-cookie c=00000000d2d2cc51 [p=00000000a81fea9c fl=226 nc=0 na=1]
	[  +0.001759] FS-Cache: O-cookie d=00000000b3020e27 n=0000000030e50417
	[  +0.001442] FS-Cache: O-key=[8] '367f2e0300000000'
	[  +0.001131] FS-Cache: N-cookie c=000000007bcf2158 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001760] FS-Cache: N-cookie d=00000000b3020e27 n=00000000d445df4c
	[  +0.001503] FS-Cache: N-key=[8] '367f2e0300000000'
	[  +0.439912] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=000000000a15bb65 [p=00000000a81fea9c fl=226 nc=0 na=1]
	[  +0.001773] FS-Cache: O-cookie d=00000000b3020e27 n=00000000a4d7a621
	[  +0.001447] FS-Cache: O-key=[8] '3e7f2e0300000000'
	[  +0.001103] FS-Cache: N-cookie c=000000001f485fd0 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001738] FS-Cache: N-cookie d=00000000b3020e27 n=000000000eab18f1
	[  +0.001440] FS-Cache: N-key=[8] '3e7f2e0300000000'
	
	* 
	* ==> etcd [5336dfc97c94] <==
	* {"level":"info","ts":"2022-07-28T22:33:03.449Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-07-28T22:33:03.449Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-28T22:33:03.450Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-28T22:33:03.450Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-07-28T22:33:03.450Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-28T22:33:03.450Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T22:33:03.493Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.3"}
	{"level":"info","ts":"2022-07-28T22:33:03.493Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.3"}
	{"level":"info","ts":"2022-07-28T22:33:05.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2022-07-28T22:33:05.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-07-28T22:33:05.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-07-28T22:33:05.237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2022-07-28T22:33:05.238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-28T22:33:05.238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2022-07-28T22:33:05.238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2022-07-28T22:33:05.238Z","caller":"etcdserver/server.go:2512","msg":"updating cluster version using v2 API","from":"3.3","to":"3.5"}
	{"level":"info","ts":"2022-07-28T22:33:05.239Z","caller":"membership/cluster.go:576","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.3","to":"3.5"}
	{"level":"info","ts":"2022-07-28T22:33:05.239Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:33:05.239Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-20220728152732-12923 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T22:33:05.239Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:33:05.239Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:33:05.240Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T22:33:05.240Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T22:33:05.241Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-07-28T22:33:05.241Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [e9a3cdd19608] <==
	* 2022-07-28 22:32:09.616372 I | raft: ea7e25599daad906 became follower at term 0
	2022-07-28 22:32:09.616378 I | raft: newRaft ea7e25599daad906 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2022-07-28 22:32:09.616380 I | raft: ea7e25599daad906 became follower at term 1
	2022-07-28 22:32:09.619295 W | auth: simple token is not cryptographically signed
	2022-07-28 22:32:09.620644 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2022-07-28 22:32:09.621026 I | etcdserver: ea7e25599daad906 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2022-07-28 22:32:09.621490 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	2022-07-28 22:32:09.622272 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-07-28 22:32:09.622421 I | embed: listening for metrics on http://192.168.76.2:2381
	2022-07-28 22:32:09.622488 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-07-28 22:32:10.017055 I | raft: ea7e25599daad906 is starting a new election at term 1
	2022-07-28 22:32:10.017132 I | raft: ea7e25599daad906 became candidate at term 2
	2022-07-28 22:32:10.017143 I | raft: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	2022-07-28 22:32:10.017157 I | raft: ea7e25599daad906 became leader at term 2
	2022-07-28 22:32:10.017162 I | raft: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2022-07-28 22:32:10.017330 I | etcdserver: setting up the initial cluster version to 3.3
	2022-07-28 22:32:10.018194 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-07-28 22:32:10.018238 I | etcdserver/api: enabled capabilities for version 3.3
	2022-07-28 22:32:10.018261 I | etcdserver: published {Name:kubernetes-upgrade-20220728152732-12923 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2022-07-28 22:32:10.018266 I | embed: ready to serve client requests
	2022-07-28 22:32:10.018317 I | embed: ready to serve client requests
	2022-07-28 22:32:10.019708 I | embed: serving client requests on 127.0.0.1:2379
	2022-07-28 22:32:10.020766 I | embed: serving client requests on 192.168.76.2:2379
	2022-07-28 22:32:27.449124 N | pkg/osutil: received terminated signal, shutting down...
	2022-07-28 22:32:27.451597 I | etcdserver: skipped leadership transfer for single member cluster
	
	* 
	* ==> kernel <==
	*  22:33:12 up 54 min,  0 users,  load average: 2.46, 1.51, 1.01
	Linux kubernetes-upgrade-20220728152732-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [46027ea721c8] <==
	* I0728 22:33:06.971870       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0728 22:33:06.972690       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0728 22:33:06.972731       1 controller.go:85] Starting OpenAPI controller
	I0728 22:33:06.972742       1 controller.go:85] Starting OpenAPI V3 controller
	I0728 22:33:06.972750       1 naming_controller.go:291] Starting NamingConditionController
	I0728 22:33:06.972757       1 establishing_controller.go:76] Starting EstablishingController
	I0728 22:33:06.972764       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0728 22:33:06.972796       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0728 22:33:06.976736       1 controller.go:83] Starting OpenAPI AggregationController
	I0728 22:33:06.992043       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0728 22:33:07.008581       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0728 22:33:07.055531       1 cache.go:39] Caches are synced for autoregister controller
	I0728 22:33:07.055780       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0728 22:33:07.055916       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0728 22:33:07.060990       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0728 22:33:07.061354       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0728 22:33:07.072399       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0728 22:33:07.097821       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0728 22:33:07.744579       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0728 22:33:07.959371       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0728 22:33:08.611870       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0728 22:33:08.618965       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0728 22:33:08.641897       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0728 22:33:08.654028       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0728 22:33:08.659223       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [79d0d8be9c2e] <==
	* I0728 22:32:30.268602       1 server.go:558] external host was not specified, using 192.168.76.2
	I0728 22:32:30.269065       1 server.go:158] Version: v1.24.3
	I0728 22:32:30.269095       1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:32:30.738878       1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
	I0728 22:32:30.739270       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0728 22:32:30.739301       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0728 22:32:30.740442       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0728 22:32:30.740647       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	W0728 22:32:30.743133       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:32:31.742033       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:32:31.743656       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:32:32.743007       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:32:33.230790       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:32:34.309531       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:32:36.188921       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:32:36.923769       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:32:40.829520       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:32:41.169208       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:32:46.474302       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:32:47.714259       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	E0728 22:32:50.721206       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-controller-manager [377d543ae982] <==
	* I0728 22:33:09.559053       1 node_lifecycle_controller.go:505] Controller will reconcile labels.
	I0728 22:33:09.559071       1 controllermanager.go:593] Started "nodelifecycle"
	I0728 22:33:09.559095       1 node_lifecycle_controller.go:539] Starting node controller
	I0728 22:33:09.559123       1 shared_informer.go:255] Waiting for caches to sync for taint
	I0728 22:33:09.714752       1 controllermanager.go:593] Started "namespace"
	I0728 22:33:09.714814       1 namespace_controller.go:200] Starting namespace controller
	I0728 22:33:09.714820       1 shared_informer.go:255] Waiting for caches to sync for namespace
	I0728 22:33:09.757657       1 controllermanager.go:593] Started "deployment"
	I0728 22:33:09.757768       1 deployment_controller.go:153] "Starting controller" controller="deployment"
	I0728 22:33:09.757877       1 shared_informer.go:255] Waiting for caches to sync for deployment
	I0728 22:33:09.807495       1 controllermanager.go:593] Started "csrcleaner"
	I0728 22:33:09.807528       1 cleaner.go:82] Starting CSR cleaner controller
	I0728 22:33:09.857404       1 controllermanager.go:593] Started "pv-protection"
	I0728 22:33:09.857443       1 pv_protection_controller.go:79] Starting PV protection controller
	I0728 22:33:09.857517       1 shared_informer.go:255] Waiting for caches to sync for PV protection
	I0728 22:33:09.906997       1 controllermanager.go:593] Started "endpointslice"
	I0728 22:33:09.907014       1 endpointslice_controller.go:257] Starting endpoint slice controller
	I0728 22:33:09.907050       1 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
	I0728 22:33:09.957090       1 node_lifecycle_controller.go:77] Sending events to api server
	E0728 22:33:09.957127       1 core.go:211] failed to start cloud node lifecycle controller: no cloud provider provided
	W0728 22:33:09.957136       1 controllermanager.go:571] Skipping "cloud-node-lifecycle"
	I0728 22:33:10.007985       1 controllermanager.go:593] Started "clusterrole-aggregation"
	I0728 22:33:10.008050       1 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
	I0728 22:33:10.008096       1 shared_informer.go:255] Waiting for caches to sync for ClusterRoleAggregator
	I0728 22:33:10.057901       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [4083a2a63cb1] <==
	* I0728 22:32:47.861296       1 serving.go:348] Generated self-signed cert in-memory
	I0728 22:32:48.393955       1 controllermanager.go:180] Version: v1.24.3
	I0728 22:32:48.394039       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:32:48.394827       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0728 22:32:48.394831       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0728 22:32:48.394853       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0728 22:32:48.394869       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-scheduler [d07c2b05dce5] <==
	* W0728 22:32:58.876592       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0728 22:32:58.876632       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://control-plane.minikube.internal:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0728 22:32:58.927545       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0728 22:32:58.927725       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0728 22:32:59.003592       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0728 22:32:59.003768       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0728 22:32:59.103271       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://control-plane.minikube.internal:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0728 22:32:59.103466       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://control-plane.minikube.internal:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0728 22:32:59.149403       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0728 22:32:59.149504       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://control-plane.minikube.internal:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0728 22:32:59.845771       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0728 22:32:59.845813       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://control-plane.minikube.internal:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0728 22:33:00.187082       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0728 22:33:00.187128       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0728 22:33:00.296674       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0728 22:33:00.296721       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0728 22:33:00.311809       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0728 22:33:00.311856       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	W0728 22:33:00.403904       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	E0728 22:33:00.403946       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	I0728 22:33:00.912189       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0728 22:33:00.912266       1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0728 22:33:00.912269       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0728 22:33:00.912316       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0728 22:33:00.913175       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [d34426f4338f] <==
	* I0728 22:33:04.508881       1 serving.go:348] Generated self-signed cert in-memory
	W0728 22:33:06.998927       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0728 22:33:06.998946       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0728 22:33:06.998952       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0728 22:33:06.998957       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0728 22:33:07.012905       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0728 22:33:07.012956       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:33:07.014253       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0728 22:33:07.014327       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0728 22:33:07.014757       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0728 22:33:07.014787       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0728 22:33:07.114862       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 22:31:49 UTC, end at Thu 2022-07-28 22:33:13 UTC. --
	Jul 28 22:33:06 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:06.820535    3831 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220728152732-12923\" not found"
	Jul 28 22:33:06 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:06.921552    3831 kubelet.go:2424] "Error getting node" err="node \"kubernetes-upgrade-20220728152732-12923\" not found"
	Jul 28 22:33:07 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: I0728 22:33:07.021992    3831 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 28 22:33:07 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: I0728 22:33:07.022582    3831 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 28 22:33:07 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: I0728 22:33:07.086357    3831 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-20220728152732-12923"
	Jul 28 22:33:07 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: I0728 22:33:07.086452    3831 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-20220728152732-12923"
	Jul 28 22:33:07 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: I0728 22:33:07.448542    3831 apiserver.go:52] "Watching apiserver"
	Jul 28 22:33:07 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: I0728 22:33:07.453117    3831 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 22:33:07 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: I0728 22:33:07.524955    3831 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmp8s\" (UniqueName: \"kubernetes.io/projected/68ed419a-e45d-4d15-9a4c-7cc4febbfebe-kube-api-access-fmp8s\") pod \"storage-provisioner\" (UID: \"68ed419a-e45d-4d15-9a4c-7cc4febbfebe\") " pod="kube-system/storage-provisioner"
	Jul 28 22:33:07 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: I0728 22:33:07.525043    3831 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/68ed419a-e45d-4d15-9a4c-7cc4febbfebe-tmp\") pod \"storage-provisioner\" (UID: \"68ed419a-e45d-4d15-9a4c-7cc4febbfebe\") " pod="kube-system/storage-provisioner"
	Jul 28 22:33:07 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: I0728 22:33:07.525056    3831 reconciler.go:157] "Reconciler: start to sync state"
	Jul 28 22:33:08 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:08.002465    3831 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 28 22:33:08 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:08.002600    3831 projected.go:192] Error preparing data for projected volume kube-api-access-fmp8s for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 28 22:33:08 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:08.002692    3831 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/68ed419a-e45d-4d15-9a4c-7cc4febbfebe-kube-api-access-fmp8s podName:68ed419a-e45d-4d15-9a4c-7cc4febbfebe nodeName:}" failed. No retries permitted until 2022-07-28 22:33:08.502675009 +0000 UTC m=+6.142954902 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fmp8s" (UniqueName: "kubernetes.io/projected/68ed419a-e45d-4d15-9a4c-7cc4febbfebe-kube-api-access-fmp8s") pod "storage-provisioner" (UID: "68ed419a-e45d-4d15-9a4c-7cc4febbfebe") : configmap "kube-root-ca.crt" not found
	Jul 28 22:33:08 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:08.532134    3831 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 28 22:33:08 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:08.532175    3831 projected.go:192] Error preparing data for projected volume kube-api-access-fmp8s for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 28 22:33:08 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:08.532233    3831 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/68ed419a-e45d-4d15-9a4c-7cc4febbfebe-kube-api-access-fmp8s podName:68ed419a-e45d-4d15-9a4c-7cc4febbfebe nodeName:}" failed. No retries permitted until 2022-07-28 22:33:09.532220191 +0000 UTC m=+7.172500086 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fmp8s" (UniqueName: "kubernetes.io/projected/68ed419a-e45d-4d15-9a4c-7cc4febbfebe-kube-api-access-fmp8s") pod "storage-provisioner" (UID: "68ed419a-e45d-4d15-9a4c-7cc4febbfebe") : configmap "kube-root-ca.crt" not found
	Jul 28 22:33:08 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: I0728 22:33:08.552021    3831 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f6e4cde585ce5be4b155b6b586fb23b2 path="/var/lib/kubelet/pods/f6e4cde585ce5be4b155b6b586fb23b2/volumes"
	Jul 28 22:33:08 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: I0728 22:33:08.552266    3831 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/f6e4cde585ce5be4b155b6b586fb23b2/volumes"
	Jul 28 22:33:09 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:09.538937    3831 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 28 22:33:09 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:09.538984    3831 projected.go:192] Error preparing data for projected volume kube-api-access-fmp8s for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 28 22:33:09 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:09.539045    3831 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/68ed419a-e45d-4d15-9a4c-7cc4febbfebe-kube-api-access-fmp8s podName:68ed419a-e45d-4d15-9a4c-7cc4febbfebe nodeName:}" failed. No retries permitted until 2022-07-28 22:33:11.539031734 +0000 UTC m=+9.179311627 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fmp8s" (UniqueName: "kubernetes.io/projected/68ed419a-e45d-4d15-9a4c-7cc4febbfebe-kube-api-access-fmp8s") pod "storage-provisioner" (UID: "68ed419a-e45d-4d15-9a4c-7cc4febbfebe") : configmap "kube-root-ca.crt" not found
	Jul 28 22:33:11 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:11.557435    3831 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 28 22:33:11 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:11.557515    3831 projected.go:192] Error preparing data for projected volume kube-api-access-fmp8s for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 28 22:33:11 kubernetes-upgrade-20220728152732-12923 kubelet[3831]: E0728 22:33:11.557583    3831 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/68ed419a-e45d-4d15-9a4c-7cc4febbfebe-kube-api-access-fmp8s podName:68ed419a-e45d-4d15-9a4c-7cc4febbfebe nodeName:}" failed. No retries permitted until 2022-07-28 22:33:15.55756201 +0000 UTC m=+13.197841919 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fmp8s" (UniqueName: "kubernetes.io/projected/68ed419a-e45d-4d15-9a4c-7cc4febbfebe-kube-api-access-fmp8s") pod "storage-provisioner" (UID: "68ed419a-e45d-4d15-9a4c-7cc4febbfebe") : configmap "kube-root-ca.crt" not found
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-20220728152732-12923 -n kubernetes-upgrade-20220728152732-12923
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-20220728152732-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context kubernetes-upgrade-20220728152732-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (1.63965093s)
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-20220728152732-12923 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-20220728152732-12923 describe pod storage-provisioner: exit status 1 (47.423307ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-20220728152732-12923 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220728152732-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220728152732-12923

=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220728152732-12923: (2.930939891s)
--- FAIL: TestKubernetesUpgrade (346.65s)
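For reference, TestKubernetesUpgrade exercises roughly this flow: start a cluster pinned to an old Kubernetes release, then restart the same profile on a newer release and verify the control plane comes back healthy. A minimal sketch using standard minikube flags (the profile name is hypothetical and the starting version is an assumption; the v1.24.3 target is taken from the logs above):

	# start on an old release, then upgrade the same profile in place
	out/minikube-darwin-amd64 start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=docker
	out/minikube-darwin-amd64 start -p upgrade-demo --kubernetes-version=v1.24.3 --driver=docker
	out/minikube-darwin-amd64 kubectl -- get pods -A
	out/minikube-darwin-amd64 delete -p upgrade-demo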

x
+
TestMissingContainerUpgrade (48.5s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.753295604.exe start -p missing-upgrade-20220728152643-12923 --memory=2200 --driver=docker 
E0728 15:27:13.987138   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.753295604.exe start -p missing-upgrade-20220728152643-12923 --memory=2200 --driver=docker : exit status 78 (34.400783607s)

-- stdout --
	* [missing-upgrade-20220728152643-12923] minikube v1.9.1 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-20220728152643-12923
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-20220728152643-12923" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-28 22:26:59.994373208 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-20220728152643-12923" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-28 22:27:16.469372169 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

** /stderr **
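The stderr above ends at systemd's generic pointers ("systemctl status docker.service" and "journalctl -xe"). A minimal follow-up sketch, run from the host against this run's kic container; it assumes the container missing-upgrade-20220728152643-12923 is still running with systemd as PID 1 (which the docker inspect output further down indicates), and is a hypothetical debugging session, not part of the recorded run:

	# Ask systemd inside the kic container why docker.service failed:
	docker exec -t missing-upgrade-20220728152643-12923 systemctl status docker.service --no-pager
	docker exec -t missing-upgrade-20220728152643-12923 journalctl -xe --no-pager
	# Compare what the provisioner wrote with what systemd actually loaded:
	docker exec -t missing-upgrade-20220728152643-12923 cat /lib/systemd/system/docker.service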
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.753295604.exe start -p missing-upgrade-20220728152643-12923 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.753295604.exe start -p missing-upgrade-20220728152643-12923 --memory=2200 --driver=docker : exit status 70 (3.999307934s)

-- stdout --
	* [missing-upgrade-20220728152643-12923] minikube v1.9.1 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220728152643-12923
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220728152643-12923" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.753295604.exe start -p missing-upgrade-20220728152643-12923 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.1.753295604.exe start -p missing-upgrade-20220728152643-12923 --memory=2200 --driver=docker : exit status 70 (4.058705315s)

-- stdout --
	* [missing-upgrade-20220728152643-12923] minikube v1.9.1 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-20220728152643-12923
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-20220728152643-12923" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:482: *** TestMissingContainerUpgrade FAILED at 2022-07-28 15:27:29.091354 -0700 PDT m=+2963.303948467
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-20220728152643-12923
helpers_test.go:235: (dbg) docker inspect missing-upgrade-20220728152643-12923:

-- stdout --
	[
	    {
	        "Id": "ba6c99c8bc49e5ad5ceb620f5dadc837271dba3b93b26a14c9461822c0284b5f",
	        "Created": "2022-07-28T22:27:08.219034733Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 149563,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:27:08.437959033Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/ba6c99c8bc49e5ad5ceb620f5dadc837271dba3b93b26a14c9461822c0284b5f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ba6c99c8bc49e5ad5ceb620f5dadc837271dba3b93b26a14c9461822c0284b5f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ba6c99c8bc49e5ad5ceb620f5dadc837271dba3b93b26a14c9461822c0284b5f/hosts",
	        "LogPath": "/var/lib/docker/containers/ba6c99c8bc49e5ad5ceb620f5dadc837271dba3b93b26a14c9461822c0284b5f/ba6c99c8bc49e5ad5ceb620f5dadc837271dba3b93b26a14c9461822c0284b5f-json.log",
	        "Name": "/missing-upgrade-20220728152643-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-20220728152643-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5f0d673135c84796c0ebebc33b9d0716bf2772537259e817929af463a7aae6e0-init/diff:/var/lib/docker/overlay2/75527f61fe1c74577086f5a0cce2297dfb25dcdc876ab959e46b3c61d79d16c5/diff:/var/lib/docker/overlay2/adf4f675ee1a5c7a0969326b1988b69ad8f94d94b8aa317f8a5f101daa01aec5/diff:/var/lib/docker/overlay2/91e9bfe5724a133e2602657b8314695cd642cc22788aa678e9a6fe26f41ac3a8/diff:/var/lib/docker/overlay2/64a5dc4c5c6e0c2ca57fc7d1f3f7b8f1ec960c60ad9cd53a819f887cf1458915/diff:/var/lib/docker/overlay2/e5c8dcadcad8ff90ad268a64ba4e60117231e19a86932b04a7db1fa024be3c86/diff:/var/lib/docker/overlay2/65b47a70bf5876042eca4ed9dbc08728657e160283bc801943c3fffbb340ce0f/diff:/var/lib/docker/overlay2/a966a60d87c48b193f1052ef2312f5399d2d0c28684d527a00ef795862ad2f86/diff:/var/lib/docker/overlay2/235ec13a881649eb25b7c6ed7f9cc14d8b2a8d1b5b03a1c6bd306f2f92cc49ac/diff:/var/lib/docker/overlay2/1f606f9ff294f29132a91e84bb0e400600cebc8529c4516ac34de1ddd0b01fd1/diff:/var/lib/docker/overlay2/a9e839
19a13e139fff94bf384f62e1385061b705dee0288aece77716f851d5bd/diff:/var/lib/docker/overlay2/ed5bc9b221d0f65ba5a1c158e59a3afc035d222a70673dd4a7591e1eec96661c/diff:/var/lib/docker/overlay2/23504c6d2bb74a35f1f62b55cc70999531271eb46a68f3de8e5f6fa370afcc92/diff:/var/lib/docker/overlay2/c0c1e1ab226a8f6be7ea1a2155264c5440dde2763c188c87b0f4147c032ed4fa/diff:/var/lib/docker/overlay2/ddceb8ca34e4f17bcd9c8c526da1968bf2370447d391f33bb49272973fff4c3c/diff:/var/lib/docker/overlay2/424bd5c93d5826037ef37255f04c2b8c52c087089e936e51c60aaaffc68a4a94/diff:/var/lib/docker/overlay2/0a96e39a584abac7143d0e741b9d5f13a5e6ba3bfe7ff933be8676e27c598c4e/diff:/var/lib/docker/overlay2/48cea15afbd051f76a5acd27bb40516b3003dbfd1657b8e069101bc0a4117e42/diff:/var/lib/docker/overlay2/f778ce187c19d815f174c37c9c067e8207c9cae92d061ef64bbcb50b849a7f06/diff:/var/lib/docker/overlay2/c48eb9a685ac16678a24297c706c32ec213cde3512c075e867634c6845eccd91/diff:/var/lib/docker/overlay2/6c37355a10c9b0a71d6151bb7a607d35a3857290c5478e89a1b3eb771ebf9e27/diff:/var/lib/d
ocker/overlay2/28fbf0eab797492cf3c07b0822193d73a7d34cda40c5c234466eb199c3bdbd0a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f0d673135c84796c0ebebc33b9d0716bf2772537259e817929af463a7aae6e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f0d673135c84796c0ebebc33b9d0716bf2772537259e817929af463a7aae6e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f0d673135c84796c0ebebc33b9d0716bf2772537259e817929af463a7aae6e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-20220728152643-12923",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-20220728152643-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-20220728152643-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-20220728152643-12923",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-20220728152643-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bf8b4150934e0f2f772b8bfe9654605c183bebc473223991bcaef0581d7720d0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57480"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57481"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57482"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bf8b4150934e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "ac935df99a1d5b49afa4f4c5a70e1451b4ac00357e8e32caceee180f041e7dc3",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "76531f69b84c5a1c7848fda07e174d7f5bc8f08be4ad7f1191b14db3dc0aeb08",
	                    "EndpointID": "ac935df99a1d5b49afa4f4c5a70e1451b4ac00357e8e32caceee180f041e7dc3",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220728152643-12923 -n missing-upgrade-20220728152643-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-20220728152643-12923 -n missing-upgrade-20220728152643-12923: exit status 6 (406.139227ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0728 15:27:29.553427   24694 status.go:413] kubeconfig endpoint: extract IP: "missing-upgrade-20220728152643-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig

** /stderr **
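The status check above warns that kubectl points at a stale context and suggests "minikube update-context". Spelled out as commands, a sketch using this run's binary and profile name (only meaningful while the profile still exists):

	# Re-point the kubeconfig entry at the profile's current apiserver endpoint:
	out/minikube-darwin-amd64 update-context -p missing-upgrade-20220728152643-12923
	# Verify which context kubectl now resolves:
	kubectl config current-context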
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-20220728152643-12923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-20220728152643-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220728152643-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220728152643-12923: (2.442032783s)
--- FAIL: TestMissingContainerUpgrade (48.50s)
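The post-mortem above dumps the container's full docker inspect JSON. When only a few fields matter, the same data can be read with Go-template format strings; a small sketch against this run's container name (valid only while the container still exists):

	# Container state, extracted from the JSON shown above:
	docker inspect missing-upgrade-20220728152643-12923 --format '{{.State.Status}}'
	# Host port mapped to the container's SSH port, per the Ports map shown above:
	docker inspect missing-upgrade-20220728152643-12923 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'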

TestStoppedBinaryUpgrade/Upgrade (44.54s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1697733972.exe start -p stopped-upgrade-20220728152857-12923 --memory=2200 --vm-driver=docker 
E0728 15:29:26.715658   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1697733972.exe start -p stopped-upgrade-20220728152857-12923 --memory=2200 --vm-driver=docker : exit status 70 (34.019292341s)

-- stdout --
	* [stopped-upgrade-20220728152857-12923] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig685889727
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-28 22:29:13.644594342 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-20220728152857-12923" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-28 22:29:29.618271225 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-20220728152857-12923", then "minikube start -p stopped-upgrade-20220728152857-12923 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-28 22:29:29.618271225 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
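The legacy binary's own suggestion in the stdout block above is to delete the profile and retry with verbose logging. As commands, taken verbatim from that suggestion (note the delete removes the profile's state):

	minikube delete -p stopped-upgrade-20220728152857-12923
	minikube start -p stopped-upgrade-20220728152857-12923 --alsologtostderr -v=1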
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1697733972.exe start -p stopped-upgrade-20220728152857-12923 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1697733972.exe start -p stopped-upgrade-20220728152857-12923 --memory=2200 --vm-driver=docker : exit status 70 (3.325093944s)

-- stdout --
	* [stopped-upgrade-20220728152857-12923] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig1732022045
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220728152857-12923" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1697733972.exe start -p stopped-upgrade-20220728152857-12923 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.9.0.1697733972.exe start -p stopped-upgrade-20220728152857-12923 --memory=2200 --vm-driver=docker : exit status 70 (4.54941643s)

-- stdout --
	* [stopped-upgrade-20220728152857-12923] minikube v1.9.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	  - KUBECONFIG=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/legacy_kubeconfig475871396
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-20220728152857-12923" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (44.54s)

TestPause/serial/VerifyStatus (61.68s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220728152948-12923 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220728152948-12923 --output=json --layout=cluster: exit status 2 (16.07761175s)

-- stdout --
	{"Name":"pause-20220728152948-12923","StatusCode":405,"StatusName":"Stopped","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220728152948-12923","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
pause_test.go:200: incorrect status code: 405
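The assertion compares the numeric StatusCode fields in the single-line JSON above. A quick way to pull those fields out when reading such output, assuming jq is available (a reading aid, not part of the test harness):

	out/minikube-darwin-amd64 status -p pause-20220728152948-12923 --output=json --layout=cluster \
	  | jq '{cluster: .StatusCode, apiserver: .Nodes[0].Components.apiserver.StatusCode, kubelet: .Nodes[0].Components.kubelet.StatusCode}'
	# For this run: cluster, apiserver and kubelet all report 405 ("Stopped"),
	# which pause_test.go rejects as an incorrect status code.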
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/VerifyStatus]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220728152948-12923
helpers_test.go:235: (dbg) docker inspect pause-20220728152948-12923:

-- stdout --
	[
	    {
	        "Id": "622bb8ea015304738badbb94d46965b0e0c42acef533b2903431c3fec02e8650",
	        "Created": "2022-07-28T22:29:55.59764756Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 160447,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:29:55.878237376Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/622bb8ea015304738badbb94d46965b0e0c42acef533b2903431c3fec02e8650/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/622bb8ea015304738badbb94d46965b0e0c42acef533b2903431c3fec02e8650/hostname",
	        "HostsPath": "/var/lib/docker/containers/622bb8ea015304738badbb94d46965b0e0c42acef533b2903431c3fec02e8650/hosts",
	        "LogPath": "/var/lib/docker/containers/622bb8ea015304738badbb94d46965b0e0c42acef533b2903431c3fec02e8650/622bb8ea015304738badbb94d46965b0e0c42acef533b2903431c3fec02e8650-json.log",
	        "Name": "/pause-20220728152948-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20220728152948-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220728152948-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2147483648,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/20e9afd384840ecaabfabb8b56b383c70301ff6b4be20479f25029d8516965ce-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20e9afd384840ecaabfabb8b56b383c70301ff6b4be20479f25029d8516965ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20e9afd384840ecaabfabb8b56b383c70301ff6b4be20479f25029d8516965ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20e9afd384840ecaabfabb8b56b383c70301ff6b4be20479f25029d8516965ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20220728152948-12923",
	                "Source": "/var/lib/docker/volumes/pause-20220728152948-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220728152948-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220728152948-12923",
	                "name.minikube.sigs.k8s.io": "pause-20220728152948-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "45701f3756d20299f282beab4fd84a8d22aee2faa0871e3be7e4923f3061697d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57651"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57652"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57648"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57649"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "57650"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/45701f3756d2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220728152948-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "622bb8ea0153",
	                        "pause-20220728152948-12923"
	                    ],
	                    "NetworkID": "c924fc37034004eb3beb0bb4ab189f67ead63ed926845ed9af797ca68e8ff54c",
	                    "EndpointID": "cda037c818546ace19865bb6d586c71a28eab3b102186db3d0f2af61ce30e04d",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
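
The tests in this report repeatedly resolve a container's published host port with a docker container inspect Go template, e.g. {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. A minimal Go sketch of the same lookup, shelling out to the docker CLI the way the cli_runner lines do (the container name is taken from this log; the helper is illustrative, not part of the suite):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort returns the host port docker published for a container
    // port such as "22/tcp", using the same inspect template seen in
    // the cli_runner log lines above.
    func hostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", fmt.Errorf("docker inspect %s: %w", container, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPort("pause-20220728152948-12923", "22/tcp")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("ssh published on host port", p) // 57651 in the inspect output above
    }
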
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220728152948-12923 -n pause-20220728152948-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-20220728152948-12923 -n pause-20220728152948-12923: exit status 2 (16.080821445s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
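
A non-zero exit from minikube status is expected data here rather than a hard failure, which is why the helper prints "(may be ok)". A sketch of how a Go harness can recover the exit code instead of aborting (pattern only; not the actual helpers_test.go implementation):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-amd64", "status",
            "--format={{.Host}}", "-p", "pause-20220728152948-12923")
        out, err := cmd.Output()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // Treat the exit code as a status signal, not a failure.
            fmt.Printf("status exited %d, stdout: %s", exitErr.ExitCode(), out)
            return
        }
        if err != nil {
            fmt.Println("could not run status:", err)
            return
        }
        fmt.Printf("host state: %s", out)
    }
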
helpers_test.go:244: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p pause-20220728152948-12923 logs -n 25

=== CONT  TestPause/serial/VerifyStatus
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p pause-20220728152948-12923 logs -n 25: (13.325436786s)
helpers_test.go:252: TestPause/serial/VerifyStatus logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                      | force-systemd-env-20220728152357-12923  | jenkins | v1.26.0 | 28 Jul 22 15:23 PDT | 28 Jul 22 15:24 PDT |
	|         | force-systemd-env-20220728152357-12923  |                                         |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr -v=5    |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| delete  | -p                                      | offline-docker-20220728152330-12923     | jenkins | v1.26.0 | 28 Jul 22 15:24 PDT | 28 Jul 22 15:24 PDT |
	|         | offline-docker-20220728152330-12923     |                                         |         |         |                     |                     |
	| start   | -p                                      | force-systemd-flag-20220728152420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:24 PDT | 28 Jul 22 15:24 PDT |
	|         | force-systemd-flag-20220728152420-12923 |                                         |         |         |                     |                     |
	|         | --memory=2048 --force-systemd           |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=5 --driver=docker  |                                         |         |         |                     |                     |
	| ssh     | force-systemd-env-20220728152357-12923  | force-systemd-env-20220728152357-12923  | jenkins | v1.26.0 | 28 Jul 22 15:24 PDT | 28 Jul 22 15:24 PDT |
	|         | ssh docker info --format                |                                         |         |         |                     |                     |
	|         | {{.CgroupDriver}}                       |                                         |         |         |                     |                     |
	| delete  | -p                                      | force-systemd-env-20220728152357-12923  | jenkins | v1.26.0 | 28 Jul 22 15:24 PDT | 28 Jul 22 15:24 PDT |
	|         | force-systemd-env-20220728152357-12923  |                                         |         |         |                     |                     |
	| start   | -p                                      | docker-flags-20220728152429-12923       | jenkins | v1.26.0 | 28 Jul 22 15:24 PDT | 28 Jul 22 15:25 PDT |
	|         | docker-flags-20220728152429-12923       |                                         |         |         |                     |                     |
	|         | --cache-images=false                    |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --install-addons=false                  |                                         |         |         |                     |                     |
	|         | --wait=false --docker-env=FOO=BAR       |                                         |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                    |                                         |         |         |                     |                     |
	|         | --docker-opt=debug                      |                                         |         |         |                     |                     |
	|         | --docker-opt=icc=true                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| ssh     | force-systemd-flag-20220728152420-12923 | force-systemd-flag-20220728152420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:24 PDT | 28 Jul 22 15:24 PDT |
	|         | ssh docker info --format                |                                         |         |         |                     |                     |
	|         | {{.CgroupDriver}}                       |                                         |         |         |                     |                     |
	| delete  | -p                                      | force-systemd-flag-20220728152420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:24 PDT | 28 Jul 22 15:24 PDT |
	|         | force-systemd-flag-20220728152420-12923 |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-expiration-20220728152452-12923    | jenkins | v1.26.0 | 28 Jul 22 15:24 PDT | 28 Jul 22 15:25 PDT |
	|         | cert-expiration-20220728152452-12923    |                                         |         |         |                     |                     |
	|         | --memory=2048 --cert-expiration=3m      |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| ssh     | docker-flags-20220728152429-12923       | docker-flags-20220728152429-12923       | jenkins | v1.26.0 | 28 Jul 22 15:25 PDT | 28 Jul 22 15:25 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |         |                     |                     |
	|         | --property=Environment --no-pager       |                                         |         |         |                     |                     |
	| ssh     | docker-flags-20220728152429-12923       | docker-flags-20220728152429-12923       | jenkins | v1.26.0 | 28 Jul 22 15:25 PDT | 28 Jul 22 15:25 PDT |
	|         | ssh sudo systemctl show docker          |                                         |         |         |                     |                     |
	|         | --property=ExecStart --no-pager         |                                         |         |         |                     |                     |
	| delete  | -p                                      | docker-flags-20220728152429-12923       | jenkins | v1.26.0 | 28 Jul 22 15:25 PDT | 28 Jul 22 15:25 PDT |
	|         | docker-flags-20220728152429-12923       |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-options-20220728152503-12923       | jenkins | v1.26.0 | 28 Jul 22 15:25 PDT | 28 Jul 22 15:25 PDT |
	|         | cert-options-20220728152503-12923       |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1               |                                         |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15           |                                         |         |         |                     |                     |
	|         | --apiserver-names=localhost             |                                         |         |         |                     |                     |
	|         | --apiserver-names=www.google.com        |                                         |         |         |                     |                     |
	|         | --apiserver-port=8555                   |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	|         | --apiserver-name=localhost              |                                         |         |         |                     |                     |
	| ssh     | cert-options-20220728152503-12923       | cert-options-20220728152503-12923       | jenkins | v1.26.0 | 28 Jul 22 15:25 PDT | 28 Jul 22 15:25 PDT |
	|         | ssh openssl x509 -text -noout -in       |                                         |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt   |                                         |         |         |                     |                     |
	| ssh     | -p                                      | cert-options-20220728152503-12923       | jenkins | v1.26.0 | 28 Jul 22 15:25 PDT | 28 Jul 22 15:25 PDT |
	|         | cert-options-20220728152503-12923       |                                         |         |         |                     |                     |
	|         | -- sudo cat                             |                                         |         |         |                     |                     |
	|         | /etc/kubernetes/admin.conf              |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-options-20220728152503-12923       | jenkins | v1.26.0 | 28 Jul 22 15:25 PDT | 28 Jul 22 15:25 PDT |
	|         | cert-options-20220728152503-12923       |                                         |         |         |                     |                     |
	| delete  | -p                                      | running-upgrade-20220728152536-12923    | jenkins | v1.26.0 | 28 Jul 22 15:26 PDT | 28 Jul 22 15:26 PDT |
	|         | running-upgrade-20220728152536-12923    |                                         |         |         |                     |                     |
	| delete  | -p                                      | missing-upgrade-20220728152643-12923    | jenkins | v1.26.0 | 28 Jul 22 15:27 PDT | 28 Jul 22 15:27 PDT |
	|         | missing-upgrade-20220728152643-12923    |                                         |         |         |                     |                     |
	| start   | -p                                      | kubernetes-upgrade-20220728152732-12923 | jenkins | v1.26.0 | 28 Jul 22 15:27 PDT |                     |
	|         | kubernetes-upgrade-20220728152732-12923 |                                         |         |         |                     |                     |
	|         | --memory=2200                           |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0            |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                     |                     |
	| start   | -p                                      | cert-expiration-20220728152452-12923    | jenkins | v1.26.0 | 28 Jul 22 15:28 PDT | 28 Jul 22 15:28 PDT |
	|         | cert-expiration-20220728152452-12923    |                                         |         |         |                     |                     |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --cert-expiration=8760h                 |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| delete  | -p                                      | cert-expiration-20220728152452-12923    | jenkins | v1.26.0 | 28 Jul 22 15:28 PDT | 28 Jul 22 15:28 PDT |
	|         | cert-expiration-20220728152452-12923    |                                         |         |         |                     |                     |
	| delete  | -p                                      | stopped-upgrade-20220728152857-12923    | jenkins | v1.26.0 | 28 Jul 22 15:29 PDT | 28 Jul 22 15:29 PDT |
	|         | stopped-upgrade-20220728152857-12923    |                                         |         |         |                     |                     |
	| start   | -p pause-20220728152948-12923           | pause-20220728152948-12923              | jenkins | v1.26.0 | 28 Jul 22 15:29 PDT | 28 Jul 22 15:30 PDT |
	|         | --memory=2048                           |                                         |         |         |                     |                     |
	|         | --install-addons=false                  |                                         |         |         |                     |                     |
	|         | --wait=all --driver=docker              |                                         |         |         |                     |                     |
	| start   | -p pause-20220728152948-12923           | pause-20220728152948-12923              | jenkins | v1.26.0 | 28 Jul 22 15:30 PDT | 28 Jul 22 15:31 PDT |
	|         | --alsologtostderr -v=1                  |                                         |         |         |                     |                     |
	|         | --driver=docker                         |                                         |         |         |                     |                     |
	| pause   | -p pause-20220728152948-12923           | pause-20220728152948-12923              | jenkins | v1.26.0 | 28 Jul 22 15:31 PDT | 28 Jul 22 15:31 PDT |
	|         | --alsologtostderr -v=5                  |                                         |         |         |                     |                     |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 15:30:31
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
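
Every line below follows the glog/klog prefix the header describes: severity letter, month/day, timestamp, thread id, then source file and line. A small sketch that splits one of these lines back into fields with a regular expression (the field names are my own):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches klog-style prefixes such as:
    //   I0728 15:30:31.788682   25504 out.go:296] Setting OutFile to fd 1 ...
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

    func main() {
        line := "I0728 15:30:31.788682   25504 out.go:296] Setting OutFile to fd 1 ..."
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
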
	I0728 15:30:31.788682   25504 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:30:31.788826   25504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:30:31.788831   25504 out.go:309] Setting ErrFile to fd 2...
	I0728 15:30:31.788835   25504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:30:31.788945   25504 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:30:31.789410   25504 out.go:303] Setting JSON to false
	I0728 15:30:31.804628   25504 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8473,"bootTime":1659038958,"procs":366,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:30:31.804714   25504 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:30:31.825871   25504 out.go:177] * [pause-20220728152948-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:30:31.847370   25504 notify.go:193] Checking for updates...
	I0728 15:30:31.869059   25504 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:30:31.890795   25504 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:30:31.912289   25504 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:30:31.934281   25504 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:30:31.955852   25504 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:30:31.977741   25504 config.go:178] Loaded profile config "pause-20220728152948-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:30:31.978404   25504 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:30:32.048182   25504 docker.go:137] docker version: linux-20.10.17
	I0728 15:30:32.048340   25504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:30:32.180559   25504 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:false NGoroutines:56 SystemTime:2022-07-28 22:30:32.115970269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:30:32.202385   25504 out.go:177] * Using the docker driver based on existing profile
	I0728 15:30:32.224293   25504 start.go:284] selected driver: docker
	I0728 15:30:32.224319   25504 start.go:808] validating driver "docker" against &{Name:pause-20220728152948-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:pause-20220728152948-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:30:32.224442   25504 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:30:32.224615   25504 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:30:32.357266   25504 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:74 OomKillDisable:false NGoroutines:56 SystemTime:2022-07-28 22:30:32.293406693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:30:32.360669   25504 cni.go:95] Creating CNI manager for ""
	I0728 15:30:32.360690   25504 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:30:32.360700   25504 start_flags.go:310] config:
	{Name:pause-20220728152948-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:pause-20220728152948-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:30:32.402847   25504 out.go:177] * Starting control plane node pause-20220728152948-12923 in cluster pause-20220728152948-12923
	I0728 15:30:32.424119   25504 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:30:32.445095   25504 out.go:177] * Pulling base image ...
	I0728 15:30:32.487157   25504 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:30:32.487239   25504 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0728 15:30:32.487243   25504 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:30:32.487266   25504 cache.go:57] Caching tarball of preloaded images
	I0728 15:30:32.487452   25504 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:30:32.487470   25504 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0728 15:30:32.488481   25504 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/config.json ...
	I0728 15:30:32.550726   25504 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:30:32.550755   25504 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:30:32.550766   25504 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:30:32.550822   25504 start.go:370] acquiring machines lock for pause-20220728152948-12923: {Name:mk441e8b32ec1d0001d2dcdb4419ed2e0ec9443d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:30:32.550907   25504 start.go:374] acquired machines lock for "pause-20220728152948-12923" in 66.535µs
	I0728 15:30:32.550932   25504 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:30:32.550941   25504 fix.go:55] fixHost starting: 
	I0728 15:30:32.551186   25504 cli_runner.go:164] Run: docker container inspect pause-20220728152948-12923 --format={{.State.Status}}
	I0728 15:30:32.615552   25504 fix.go:103] recreateIfNeeded on pause-20220728152948-12923: state=Running err=<nil>
	W0728 15:30:32.615628   25504 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:30:32.637463   25504 out.go:177] * Updating the running docker "pause-20220728152948-12923" container ...
	I0728 15:30:32.679163   25504 machine.go:88] provisioning docker machine ...
	I0728 15:30:32.679193   25504 ubuntu.go:169] provisioning hostname "pause-20220728152948-12923"
	I0728 15:30:32.679274   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:32.744563   25504 main.go:134] libmachine: Using SSH client type: native
	I0728 15:30:32.744778   25504 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57651 <nil> <nil>}
	I0728 15:30:32.744796   25504 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-20220728152948-12923 && echo "pause-20220728152948-12923" | sudo tee /etc/hostname
	I0728 15:30:32.874886   25504 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220728152948-12923
	
	I0728 15:30:32.874978   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:32.939562   25504 main.go:134] libmachine: Using SSH client type: native
	I0728 15:30:32.939709   25504 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57651 <nil> <nil>}
	I0728 15:30:32.939724   25504 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20220728152948-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220728152948-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20220728152948-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:30:33.059860   25504 main.go:134] libmachine: SSH cmd err, output: <nil>: 
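
The shell fragment above is an idempotent /etc/hosts fixup: it only edits the file when no entry for the hostname exists, rewriting an existing 127.0.1.1 line in place instead of appending a duplicate. A sketch that renders the same fragment for an arbitrary name (the helper function is hypothetical):

    package main

    import "fmt"

    // hostsFixup returns a shell snippet, shaped like the SSH command
    // above, that maps 127.0.1.1 to name in /etc/hosts exactly once.
    func hostsFixup(name string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
    }

    func main() { fmt.Println(hostsFixup("pause-20220728152948-12923")) }
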
	I0728 15:30:33.059882   25504 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:30:33.059903   25504 ubuntu.go:177] setting up certificates
	I0728 15:30:33.059914   25504 provision.go:83] configureAuth start
	I0728 15:30:33.059999   25504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220728152948-12923
	I0728 15:30:33.125734   25504 provision.go:138] copyHostCerts
	I0728 15:30:33.125828   25504 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:30:33.125837   25504 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:30:33.125945   25504 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:30:33.126142   25504 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:30:33.126151   25504 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:30:33.126209   25504 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:30:33.126376   25504 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:30:33.126382   25504 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:30:33.126437   25504 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:30:33.126558   25504 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.pause-20220728152948-12923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-20220728152948-12923]
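
provision.go above mints a server certificate whose subject alternative names cover the node IP, loopback, and the profile hostname. A self-contained crypto/x509 sketch of issuing a SAN-bearing server cert from a throwaway CA (illustrative only; minikube's actual cert code differs):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Throwaway CA key pair and self-signed CA certificate.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        ca, err := x509.ParseCertificate(caDER)
        check(err)

        // Server certificate carrying the SANs listed in the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.pause-20220728152948-12923"}},
            DNSNames:     []string{"localhost", "minikube", "pause-20220728152948-12923"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
    }
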
	I0728 15:30:33.250364   25504 provision.go:172] copyRemoteCerts
	I0728 15:30:33.250450   25504 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:30:33.250495   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:33.314518   25504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57651 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728152948-12923/id_rsa Username:docker}
	I0728 15:30:33.402298   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:30:33.419811   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0728 15:30:33.436731   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 15:30:33.454378   25504 provision.go:86] duration metric: configureAuth took 394.451818ms
	I0728 15:30:33.454391   25504 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:30:33.454532   25504 config.go:178] Loaded profile config "pause-20220728152948-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:30:33.454590   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:33.519557   25504 main.go:134] libmachine: Using SSH client type: native
	I0728 15:30:33.519710   25504 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57651 <nil> <nil>}
	I0728 15:30:33.519721   25504 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:30:33.641727   25504 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:30:33.641739   25504 ubuntu.go:71] root file system type: overlay
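
The `df --output=fstype / | tail -n 1` probe above is how the provisioner learns that the container's root filesystem is overlay-backed. The same probe from Go (a trivial sketch):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
        if err != nil {
            fmt.Println("df failed:", err)
            return
        }
        fmt.Println("root fs type:", strings.TrimSpace(string(out))) // "overlay" in a kic container
    }
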
	I0728 15:30:33.641861   25504 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:30:33.641926   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:33.706779   25504 main.go:134] libmachine: Using SSH client type: native
	I0728 15:30:33.706958   25504 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57651 <nil> <nil>}
	I0728 15:30:33.707008   25504 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:30:33.837768   25504 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:30:33.837859   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:33.903227   25504 main.go:134] libmachine: Using SSH client type: native
	I0728 15:30:33.903372   25504 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 57651 <nil> <nil>}
	I0728 15:30:33.903422   25504 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:30:34.028250   25504 main.go:134] libmachine: SSH cmd err, output: <nil>: 
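
The `diff -u ... || { mv ...; systemctl ... restart docker; }` one-liner above makes the unit update idempotent: docker is only restarted when the freshly rendered docker.service actually differs from the installed one, which is why an unchanged unit produces empty output here. The same write-if-changed pattern sketched in Go (paths and the restart hook are placeholders):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged writes contents to path and invokes restart only
    // when the on-disk file differs, mirroring the diff-or-swap SSH
    // command above.
    func installIfChanged(path string, contents []byte, restart func() error) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, contents) {
            return nil // already current; skip the disruptive restart
        }
        if err := os.WriteFile(path, contents, 0o644); err != nil {
            return err
        }
        return restart()
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
        err := installIfChanged("/tmp/docker.service", unit, func() error {
            return exec.Command("systemctl", "daemon-reload").Run() // placeholder hook
        })
        fmt.Println("err:", err)
    }
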
	I0728 15:30:34.028277   25504 machine.go:91] provisioned docker machine in 1.34911962s
	I0728 15:30:34.028289   25504 start.go:307] post-start starting for "pause-20220728152948-12923" (driver="docker")
	I0728 15:30:34.028295   25504 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:30:34.028370   25504 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:30:34.028427   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:34.093706   25504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57651 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728152948-12923/id_rsa Username:docker}
	I0728 15:30:34.183067   25504 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:30:34.186350   25504 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:30:34.186363   25504 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:30:34.186369   25504 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:30:34.186374   25504 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:30:34.186389   25504 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:30:34.186494   25504 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:30:34.186638   25504 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:30:34.186787   25504 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:30:34.193516   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:30:34.209950   25504 start.go:310] post-start completed in 181.651904ms
	I0728 15:30:34.210034   25504 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:30:34.210083   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:34.273344   25504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57651 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728152948-12923/id_rsa Username:docker}
	I0728 15:30:34.357218   25504 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:30:34.361598   25504 fix.go:57] fixHost completed within 1.810678356s
	I0728 15:30:34.361610   25504 start.go:82] releasing machines lock for "pause-20220728152948-12923", held for 1.810720045s
	I0728 15:30:34.361694   25504 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-20220728152948-12923
	I0728 15:30:34.482924   25504 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:30:34.482925   25504 ssh_runner.go:195] Run: systemctl --version
	I0728 15:30:34.483014   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:34.483040   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:34.552244   25504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57651 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728152948-12923/id_rsa Username:docker}
	I0728 15:30:34.552258   25504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57651 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728152948-12923/id_rsa Username:docker}
	I0728 15:30:34.826709   25504 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:30:34.838049   25504 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:30:34.838115   25504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:30:34.849930   25504 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:30:34.863361   25504 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:30:34.957799   25504 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:30:35.056406   25504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:30:35.158845   25504 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:30:51.192849   25504 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.034197843s)
	I0728 15:30:51.192916   25504 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 15:30:51.311480   25504 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:30:51.463091   25504 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 15:30:51.480084   25504 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 15:30:51.480159   25504 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
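[editor's note] The "Will wait 60s for socket path" step above is a simple stat poll on /var/run/cri-dockerd.sock. A sketch of such a wait; the 500ms interval is an assumption, since the log does not show the polling cadence start.go uses:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls until path exists or the timeout elapses, mirroring the
	// wait for /var/run/cri-dockerd.sock above.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}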
	I0728 15:30:51.484819   25504 start.go:471] Will wait 60s for crictl version
	I0728 15:30:51.484873   25504 ssh_runner.go:195] Run: sudo crictl version
	I0728 15:30:51.527110   25504 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0728 15:30:51.527180   25504 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:30:51.602676   25504 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:30:51.731579   25504 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 15:30:51.731668   25504 cli_runner.go:164] Run: docker exec -t pause-20220728152948-12923 dig +short host.docker.internal
	I0728 15:30:51.871733   25504 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:30:51.871863   25504 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:30:51.879700   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:51.949436   25504 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:30:51.949505   25504 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:30:52.000726   25504 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 15:30:52.000744   25504 docker.go:542] Images already preloaded, skipping extraction
	I0728 15:30:52.000809   25504 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:30:52.035801   25504 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 15:30:52.035823   25504 cache_images.go:84] Images are preloaded, skipping loading
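[editor's note] The preload check above compares the output of `docker images --format {{.Repository}}:{{.Tag}}` against the expected image list for v1.24.3 and skips extraction when nothing is missing. A sketch of that comparison (an illustrative helper, not minikube's actual code), using two images from the stdout block above:

	package main

	import "fmt"

	// missingImages returns expected images absent from the `docker images`
	// output, i.e. the test behind "Images are preloaded, skipping loading".
	func missingImages(found, expected []string) []string {
		have := make(map[string]bool, len(found))
		for _, img := range found {
			have[img] = true
		}
		var missing []string
		for _, img := range expected {
			if !have[img] {
				missing = append(missing, img)
			}
		}
		return missing
	}

	func main() {
		found := []string{"k8s.gcr.io/kube-apiserver:v1.24.3", "k8s.gcr.io/pause:3.7"}
		expected := []string{"k8s.gcr.io/kube-apiserver:v1.24.3", "k8s.gcr.io/etcd:3.5.3-0"}
		fmt.Println(missingImages(found, expected)) // [k8s.gcr.io/etcd:3.5.3-0]
	}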
	I0728 15:30:52.035898   25504 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:30:52.144336   25504 cni.go:95] Creating CNI manager for ""
	I0728 15:30:52.144349   25504 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:30:52.144367   25504 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:30:52.144381   25504 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220728152948-12923 NodeName:pause-20220728152948-12923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:30:52.144489   25504 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "pause-20220728152948-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 15:30:52.144573   25504 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=pause-20220728152948-12923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:pause-20220728152948-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
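[editor's note] The 488-byte drop-in scp'd below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is the unit text printed above: a cleared ExecStart followed by the full kubelet invocation. A sketch of assembling such a drop-in, with paths taken from the log; the helper is illustrative, not minikube's actual templating:

	package main

	import (
		"fmt"
		"strings"
	)

	// kubeletDropIn assembles a systemd drop-in shaped like the 10-kubeadm.conf
	// shown above: [Unit]/Wants, then ExecStart= to clear the base unit's
	// command, then the kubelet invocation with its flags.
	func kubeletDropIn(binary string, flags []string) string {
		return "[Unit]\nWants=docker.socket\n\n[Service]\nExecStart=\nExecStart=" +
			binary + " " + strings.Join(flags, " ") + "\n\n[Install]\n"
	}

	func main() {
		fmt.Print(kubeletDropIn("/var/lib/minikube/binaries/v1.24.3/kubelet", []string{
			"--container-runtime=remote",
			"--container-runtime-endpoint=/var/run/cri-dockerd.sock",
			"--node-ip=192.168.67.2",
		}))
	}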
	I0728 15:30:52.144631   25504 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 15:30:52.152326   25504 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:30:52.152382   25504 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:30:52.160123   25504 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (488 bytes)
	I0728 15:30:52.188240   25504 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:30:52.200937   25504 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2048 bytes)
	I0728 15:30:52.213823   25504 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:30:52.217742   25504 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923 for IP: 192.168.67.2
	I0728 15:30:52.217843   25504 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:30:52.217899   25504 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:30:52.217978   25504 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/client.key
	I0728 15:30:52.218038   25504 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/apiserver.key.c7fa3a9e
	I0728 15:30:52.218104   25504 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/proxy-client.key
	I0728 15:30:52.218319   25504 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:30:52.218363   25504 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:30:52.218377   25504 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:30:52.218409   25504 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:30:52.218439   25504 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:30:52.218467   25504 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:30:52.218528   25504 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:30:52.219182   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:30:52.237552   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0728 15:30:52.254695   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:30:52.271680   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 15:30:52.289725   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:30:52.307362   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:30:52.325437   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:30:52.342300   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:30:52.359200   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:30:52.376734   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:30:52.407578   25504 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:30:52.500276   25504 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 15:30:52.522022   25504 ssh_runner.go:195] Run: openssl version
	I0728 15:30:52.579319   25504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:30:52.588370   25504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:30:52.595112   25504 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:30:52.595184   25504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:30:52.603838   25504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 15:30:52.614802   25504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:30:52.629638   25504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:30:52.678738   25504 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:30:52.678793   25504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:30:52.687040   25504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 15:30:52.697532   25504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:30:52.707524   25504 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:30:52.714476   25504 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:30:52.714544   25504 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:30:52.722316   25504 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
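[editor's note] The openssl sequence above installs each CA under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0 in this run). A sketch of deriving that symlink name, assuming openssl is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHashLink shells out to `openssl x509 -hash` the same way the log
	// lines above do, and returns the <hash>.0 name used in /etc/ssl/certs.
	func subjectHashLink(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)) + ".0", nil
	}

	func main() {
		name, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		fmt.Println(name) // b5213941.0 for the minikube CA above
	}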
	I0728 15:30:52.784395   25504 kubeadm.go:395] StartCluster: {Name:pause-20220728152948-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:pause-20220728152948-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:30:52.784506   25504 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:30:52.824542   25504 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:30:52.833975   25504 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 15:30:52.833996   25504 kubeadm.go:626] restartCluster start
	I0728 15:30:52.834074   25504 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 15:30:52.886554   25504 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:30:52.886632   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:52.956202   25504 kubeconfig.go:92] found "pause-20220728152948-12923" server: "https://127.0.0.1:57650"
	I0728 15:30:52.956628   25504 kapi.go:59] client config for pause-20220728152948-12923: &rest.Config{Host:"https://127.0.0.1:57650", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 15:30:52.957177   25504 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 15:30:52.977967   25504 api_server.go:165] Checking apiserver status ...
	I0728 15:30:52.978058   25504 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:30:52.988071   25504 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4786/cgroup
	W0728 15:30:52.996643   25504 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4786/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:30:52.996659   25504 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57650/healthz ...
	I0728 15:30:55.796860   25504 api_server.go:266] https://127.0.0.1:57650/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:30:55.796910   25504 retry.go:31] will retry after 263.082536ms: https://127.0.0.1:57650/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:30:56.061269   25504 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57650/healthz ...
	I0728 15:30:56.069871   25504 api_server.go:266] https://127.0.0.1:57650/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:30:56.069894   25504 retry.go:31] will retry after 381.329545ms: https://127.0.0.1:57650/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:30:56.451402   25504 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57650/healthz ...
	I0728 15:30:56.459214   25504 api_server.go:266] https://127.0.0.1:57650/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:30:56.459232   25504 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:57650/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:30:56.882647   25504 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57650/healthz ...
	I0728 15:30:56.888459   25504 api_server.go:266] https://127.0.0.1:57650/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:30:56.888480   25504 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:57650/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:30:57.363327   25504 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57650/healthz ...
	I0728 15:30:57.370757   25504 api_server.go:266] https://127.0.0.1:57650/healthz returned 200:
	ok
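[editor's note] The retries above show the apiserver coming up hook by hook until /healthz finally returns 200. A minimal Go sketch of such a poll loop; the fixed attempt count, delay, and InsecureSkipVerify are for illustration only (the retry.go lines above use growing backoff, and minikube verifies against the cluster CA rather than skipping TLS verification):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls an apiserver /healthz endpoint until it returns 200,
	// printing the per-hook failure body on each non-200 response, roughly
	// like the "Checking apiserver healthz" / "will retry after" lines above.
	func waitHealthz(url string, attempts int, delay time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(delay)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitHealthz("https://127.0.0.1:57650/healthz", 30, 500*time.Millisecond); err != nil {
			fmt.Println(err)
		}
	}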
	I0728 15:30:57.382063   25504 system_pods.go:86] 6 kube-system pods found
	I0728 15:30:57.382080   25504 system_pods.go:89] "coredns-6d4b75cb6d-85ds4" [413e6b61-7e29-4c25-a9f7-9343ee09b5be] Running
	I0728 15:30:57.382088   25504 system_pods.go:89] "etcd-pause-20220728152948-12923" [75b999b0-ad78-4e57-be70-726437dd9f17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0728 15:30:57.382095   25504 system_pods.go:89] "kube-apiserver-pause-20220728152948-12923" [128415e8-1d72-4933-ad90-707b64505b44] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0728 15:30:57.382100   25504 system_pods.go:89] "kube-controller-manager-pause-20220728152948-12923" [0d617bb5-8579-4ac7-995b-9a5bc3ed3b4f] Running
	I0728 15:30:57.382104   25504 system_pods.go:89] "kube-proxy-h87p7" [8b18ec6c-4080-4c7d-8911-462ea480c966] Running
	I0728 15:30:57.382110   25504 system_pods.go:89] "kube-scheduler-pause-20220728152948-12923" [6b0b006b-2c64-4a21-87c1-0e5cef51c937] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 15:30:57.383148   25504 api_server.go:140] control plane version: v1.24.3
	I0728 15:30:57.383162   25504 kubeadm.go:620] The running cluster does not require reconfiguration: 127.0.0.1
	I0728 15:30:57.383170   25504 kubeadm.go:674] Taking a shortcut, as the cluster seems to be properly configured
	I0728 15:30:57.383176   25504 kubeadm.go:630] restartCluster took 4.549234962s
	I0728 15:30:57.383181   25504 kubeadm.go:397] StartCluster complete in 4.598856377s
	I0728 15:30:57.383190   25504 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:30:57.383259   25504 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:30:57.383652   25504 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:30:57.384458   25504 kapi.go:59] client config for pause-20220728152948-12923: &rest.Config{Host:"https://127.0.0.1:57650", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 15:30:57.386780   25504 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220728152948-12923" rescaled to 1
	I0728 15:30:57.386817   25504 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 15:30:57.386831   25504 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 15:30:57.386857   25504 addons.go:412] enableAddons start: toEnable=map[], additional=[]
	I0728 15:30:57.428958   25504 out.go:177] * Verifying Kubernetes components...
	I0728 15:30:57.386975   25504 config.go:178] Loaded profile config "pause-20220728152948-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:30:57.429023   25504 addons.go:65] Setting storage-provisioner=true in profile "pause-20220728152948-12923"
	I0728 15:30:57.429040   25504 addons.go:65] Setting default-storageclass=true in profile "pause-20220728152948-12923"
	I0728 15:30:57.437804   25504 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0728 15:30:57.450032   25504 addons.go:153] Setting addon storage-provisioner=true in "pause-20220728152948-12923"
	W0728 15:30:57.450043   25504 addons.go:162] addon storage-provisioner should already be in state true
	I0728 15:30:57.450044   25504 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220728152948-12923"
	I0728 15:30:57.450069   25504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:30:57.450097   25504 host.go:66] Checking if "pause-20220728152948-12923" exists ...
	I0728 15:30:57.450363   25504 cli_runner.go:164] Run: docker container inspect pause-20220728152948-12923 --format={{.State.Status}}
	I0728 15:30:57.450914   25504 cli_runner.go:164] Run: docker container inspect pause-20220728152948-12923 --format={{.State.Status}}
	I0728 15:30:57.471532   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:57.522427   25504 kapi.go:59] client config for pause-20220728152948-12923: &rest.Config{Host:"https://127.0.0.1:57650", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/pause-20220728152948-12923/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x22fd0c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0728 15:30:57.543623   25504 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:30:57.547613   25504 addons.go:153] Setting addon default-storageclass=true in "pause-20220728152948-12923"
	W0728 15:30:57.564586   25504 addons.go:162] addon default-storageclass should already be in state true
	I0728 15:30:57.564624   25504 host.go:66] Checking if "pause-20220728152948-12923" exists ...
	I0728 15:30:57.564696   25504 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:30:57.564715   25504 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 15:30:57.564806   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:57.566614   25504 cli_runner.go:164] Run: docker container inspect pause-20220728152948-12923 --format={{.State.Status}}
	I0728 15:30:57.576179   25504 node_ready.go:35] waiting up to 6m0s for node "pause-20220728152948-12923" to be "Ready" ...
	I0728 15:30:57.579645   25504 node_ready.go:49] node "pause-20220728152948-12923" has status "Ready":"True"
	I0728 15:30:57.579655   25504 node_ready.go:38] duration metric: took 3.437207ms waiting for node "pause-20220728152948-12923" to be "Ready" ...
	I0728 15:30:57.579663   25504 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:30:57.584604   25504 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-85ds4" in "kube-system" namespace to be "Ready" ...
	I0728 15:30:57.589919   25504 pod_ready.go:92] pod "coredns-6d4b75cb6d-85ds4" in "kube-system" namespace has status "Ready":"True"
	I0728 15:30:57.589930   25504 pod_ready.go:81] duration metric: took 5.311991ms waiting for pod "coredns-6d4b75cb6d-85ds4" in "kube-system" namespace to be "Ready" ...
	I0728 15:30:57.589943   25504 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220728152948-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:30:57.636548   25504 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 15:30:57.636559   25504 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 15:30:57.636614   25504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-20220728152948-12923
	I0728 15:30:57.637469   25504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57651 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728152948-12923/id_rsa Username:docker}
	I0728 15:30:57.701719   25504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57651 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/pause-20220728152948-12923/id_rsa Username:docker}
	I0728 15:30:57.731201   25504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:30:57.793699   25504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 15:30:58.301746   25504 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0728 15:30:58.343194   25504 addons.go:414] enableAddons completed in 956.322879ms
	I0728 15:30:59.604327   25504 pod_ready.go:102] pod "etcd-pause-20220728152948-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:31:02.104695   25504 pod_ready.go:102] pod "etcd-pause-20220728152948-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:31:03.602354   25504 pod_ready.go:92] pod "etcd-pause-20220728152948-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:31:03.602366   25504 pod_ready.go:81] duration metric: took 6.012498299s waiting for pod "etcd-pause-20220728152948-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:31:03.602373   25504 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220728152948-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:31:04.114972   25504 pod_ready.go:92] pod "kube-apiserver-pause-20220728152948-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:31:04.114986   25504 pod_ready.go:81] duration metric: took 512.615449ms waiting for pod "kube-apiserver-pause-20220728152948-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:31:04.114995   25504 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220728152948-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:31:05.627472   25504 pod_ready.go:92] pod "kube-controller-manager-pause-20220728152948-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:31:05.627484   25504 pod_ready.go:81] duration metric: took 1.512504024s waiting for pod "kube-controller-manager-pause-20220728152948-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:31:05.627491   25504 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h87p7" in "kube-system" namespace to be "Ready" ...
	I0728 15:31:05.631248   25504 pod_ready.go:92] pod "kube-proxy-h87p7" in "kube-system" namespace has status "Ready":"True"
	I0728 15:31:05.631256   25504 pod_ready.go:81] duration metric: took 3.754871ms waiting for pod "kube-proxy-h87p7" in "kube-system" namespace to be "Ready" ...
	I0728 15:31:05.631261   25504 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220728152948-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:31:05.635033   25504 pod_ready.go:92] pod "kube-scheduler-pause-20220728152948-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:31:05.635041   25504 pod_ready.go:81] duration metric: took 3.775583ms waiting for pod "kube-scheduler-pause-20220728152948-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:31:05.635045   25504 pod_ready.go:38] duration metric: took 8.055469484s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
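[editor's note] The pod_ready waits above boil down to reading each pod's Ready condition from its status. A sketch of that predicate using the k8s.io/api types (assuming k8s.io/api is on the module path; the function name is illustrative):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// podReady reports whether a Pod's Ready condition is True, which is the
	// status the pod_ready.go lines above poll for.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionTrue},
		}}}
		fmt.Println(podReady(pod)) // true
	}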
	I0728 15:31:05.635062   25504 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:31:05.635109   25504 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:31:05.644340   25504 api_server.go:71] duration metric: took 8.25761631s to wait for apiserver process to appear ...
	I0728 15:31:05.644350   25504 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:31:05.644356   25504 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:57650/healthz ...
	I0728 15:31:05.649450   25504 api_server.go:266] https://127.0.0.1:57650/healthz returned 200:
	ok
	I0728 15:31:05.650590   25504 api_server.go:140] control plane version: v1.24.3
	I0728 15:31:05.650598   25504 api_server.go:130] duration metric: took 6.244483ms to wait for apiserver health ...
	I0728 15:31:05.650603   25504 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:31:05.655030   25504 system_pods.go:59] 7 kube-system pods found
	I0728 15:31:05.655042   25504 system_pods.go:61] "coredns-6d4b75cb6d-85ds4" [413e6b61-7e29-4c25-a9f7-9343ee09b5be] Running
	I0728 15:31:05.655045   25504 system_pods.go:61] "etcd-pause-20220728152948-12923" [75b999b0-ad78-4e57-be70-726437dd9f17] Running
	I0728 15:31:05.655049   25504 system_pods.go:61] "kube-apiserver-pause-20220728152948-12923" [128415e8-1d72-4933-ad90-707b64505b44] Running
	I0728 15:31:05.655052   25504 system_pods.go:61] "kube-controller-manager-pause-20220728152948-12923" [0d617bb5-8579-4ac7-995b-9a5bc3ed3b4f] Running
	I0728 15:31:05.655057   25504 system_pods.go:61] "kube-proxy-h87p7" [8b18ec6c-4080-4c7d-8911-462ea480c966] Running
	I0728 15:31:05.655061   25504 system_pods.go:61] "kube-scheduler-pause-20220728152948-12923" [6b0b006b-2c64-4a21-87c1-0e5cef51c937] Running
	I0728 15:31:05.655078   25504 system_pods.go:61] "storage-provisioner" [ccf4c686-4b17-4128-b9d7-bcea8e6fa66c] Running
	I0728 15:31:05.655084   25504 system_pods.go:74] duration metric: took 4.477889ms to wait for pod list to return data ...
	I0728 15:31:05.655088   25504 default_sa.go:34] waiting for default service account to be created ...
	I0728 15:31:05.656940   25504 default_sa.go:45] found service account: "default"
	I0728 15:31:05.656948   25504 default_sa.go:55] duration metric: took 1.856109ms for default service account to be created ...
	I0728 15:31:05.656952   25504 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 15:31:05.803752   25504 system_pods.go:86] 7 kube-system pods found
	I0728 15:31:05.803764   25504 system_pods.go:89] "coredns-6d4b75cb6d-85ds4" [413e6b61-7e29-4c25-a9f7-9343ee09b5be] Running
	I0728 15:31:05.803769   25504 system_pods.go:89] "etcd-pause-20220728152948-12923" [75b999b0-ad78-4e57-be70-726437dd9f17] Running
	I0728 15:31:05.803772   25504 system_pods.go:89] "kube-apiserver-pause-20220728152948-12923" [128415e8-1d72-4933-ad90-707b64505b44] Running
	I0728 15:31:05.803776   25504 system_pods.go:89] "kube-controller-manager-pause-20220728152948-12923" [0d617bb5-8579-4ac7-995b-9a5bc3ed3b4f] Running
	I0728 15:31:05.803779   25504 system_pods.go:89] "kube-proxy-h87p7" [8b18ec6c-4080-4c7d-8911-462ea480c966] Running
	I0728 15:31:05.803785   25504 system_pods.go:89] "kube-scheduler-pause-20220728152948-12923" [6b0b006b-2c64-4a21-87c1-0e5cef51c937] Running
	I0728 15:31:05.803789   25504 system_pods.go:89] "storage-provisioner" [ccf4c686-4b17-4128-b9d7-bcea8e6fa66c] Running
	I0728 15:31:05.803793   25504 system_pods.go:126] duration metric: took 146.839639ms to wait for k8s-apps to be running ...
	I0728 15:31:05.803797   25504 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 15:31:05.803844   25504 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:31:05.813257   25504 system_svc.go:56] duration metric: took 9.455234ms WaitForService to wait for kubelet.
	I0728 15:31:05.813269   25504 kubeadm.go:572] duration metric: took 8.42654886s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0728 15:31:05.813280   25504 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:31:06.002246   25504 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:31:06.002265   25504 node_conditions.go:123] node cpu capacity is 6
	I0728 15:31:06.002276   25504 node_conditions.go:105] duration metric: took 188.995379ms to run NodePressure ...
	I0728 15:31:06.002302   25504 start.go:216] waiting for startup goroutines ...
	I0728 15:31:06.031501   25504 start.go:506] kubectl: 1.24.2, cluster: 1.24.3 (minor skew: 0)
	I0728 15:31:06.053023   25504 out.go:177] * Done! kubectl is now configured to use "pause-20220728152948-12923" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 22:29:56 UTC, end at Thu 2022-07-28 22:31:39 UTC. --
	Jul 28 22:30:42 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:42.328319581Z" level=info msg="ignoring event" container=0a90ff2c32aef5abff18575af67a2c15ef554b048b55fd5a1dca7e31663eb7e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:30:42 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:42.331247360Z" level=info msg="ignoring event" container=c5c65df3d239af22494d4d09dcc6285df0c0f4b5510e9cb7e16debf86def143c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:30:42 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:42.340460700Z" level=info msg="ignoring event" container=e7d642fc63e680355678f6be88a62a4542f3706792ccf7a6c518c67071ba34de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:30:42 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:42.340505187Z" level=info msg="ignoring event" container=8cfa32d7dd93ac8bde0358638c1195419c4294639286656c42cf39dabded819d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:30:42 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:42.405466181Z" level=info msg="ignoring event" container=ec78e0c377e1b0179827d12ef12d90cd5f3c88fffdf2e84b6a9cb30d186f3675 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:30:42 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:42.717534275Z" level=info msg="ignoring event" container=c134f1b78b058d48b0153054275e7a599dab9009d1237dfce3308b4ca1625925 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:30:50 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:50.590199586Z" level=info msg="ignoring event" container=a43c1796e739506d065f1cf74c60b8a580bc71388de8b49dba328333dbdbc407 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:30:50 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:50.747607504Z" level=info msg="Removing stale sandbox 085cbc41bdd5be677528d3e8846afffbec6ff11728e57c2b43941347ad32a821 (0a90ff2c32aef5abff18575af67a2c15ef554b048b55fd5a1dca7e31663eb7e6)"
	Jul 28 22:30:50 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:50.748954575Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0bfcc0eccad14f43133c915b4748015332a1adbedd6e9c9368a554ef720cd31e 66e0be1c083fc242b31564d7f0814517c610a13ea227d180bc7fc4eeb87e3bd3], retrying...."
	Jul 28 22:30:50 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:50.836466449Z" level=info msg="Removing stale sandbox 4082c7f3e8631e83c8ffac0ab3c0d858e3e375ac687f7b9b17b8e3680975e828 (4ebb6b04064ebfc254c29060ff2a4ab0570eaceac6517eff70396709ad507617)"
	Jul 28 22:30:50 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:50.839941118Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint b674dbb7ce4032a1f216267aa81d57b2244159ee593a221230263aa601ae2918 fb3cd0093cad1303e353037dca12b70dbd076e882b7030c5c05626a99e1eb32e], retrying...."
	Jul 28 22:30:50 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:50.927414609Z" level=info msg="Removing stale sandbox 433b1bcd541618b6dda14395041cbb56fcf2548d8689048bd96e8d9543987dca (cd776c85d8ed631edbf31748fa7a692d755e3bda88a665b06335adfff8079fac)"
	Jul 28 22:30:50 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:50.928721029Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0bfcc0eccad14f43133c915b4748015332a1adbedd6e9c9368a554ef720cd31e c97435625a6f24dd7287f675070627689439ac0edd11cec91a2e4c210c471fb4], retrying...."
	Jul 28 22:30:51 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:51.014625736Z" level=info msg="Removing stale sandbox 479a5816b15810daee724e89f45dc41673b19b1756db04979ab60cea0f1cab47 (c5c65df3d239af22494d4d09dcc6285df0c0f4b5510e9cb7e16debf86def143c)"
	Jul 28 22:30:51 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:51.015977862Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0bfcc0eccad14f43133c915b4748015332a1adbedd6e9c9368a554ef720cd31e 6b649ad15141c3a8e44bf42d52934dbfad58b59f1cf2cf950039bbe1f0238309], retrying...."
	Jul 28 22:30:51 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:51.103239769Z" level=info msg="Removing stale sandbox b77844047774fc2984534070ee366bc6546d97e8f693bd704db399eec2cf3035 (8cfa32d7dd93ac8bde0358638c1195419c4294639286656c42cf39dabded819d)"
	Jul 28 22:30:51 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:51.104692264Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0bfcc0eccad14f43133c915b4748015332a1adbedd6e9c9368a554ef720cd31e 6b4fa62006b96cfea8f530d53658287261ad7e3313cee764a6398f804a7f6ca3], retrying...."
	Jul 28 22:30:51 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:51.127694188Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 28 22:30:51 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:51.167831030Z" level=info msg="Loading containers: done."
	Jul 28 22:30:51 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:51.177638735Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 28 22:30:51 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:51.177705508Z" level=info msg="Daemon has completed initialization"
	Jul 28 22:30:51 pause-20220728152948-12923 systemd[1]: Started Docker Application Container Engine.
	Jul 28 22:30:51 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:51.199998715Z" level=info msg="API listen on [::]:2376"
	Jul 28 22:30:51 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:51.202316193Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 28 22:30:51 pause-20220728152948-12923 dockerd[3919]: time="2022-07-28T22:30:51.313828115Z" level=error msg="Failed to compute size of container rootfs 4bb8266fe70e9185dec370f83ef5046cf1821372e75512b553ee6695cd087319: mount does not exist"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	1ffb62f011dbd       6e38f40d628db       39 seconds ago       Running             storage-provisioner       0                   c4e950aeb346a
	100b3142274b3       586c112956dfc       48 seconds ago       Running             kube-controller-manager   2                   63000382da0d9
	47e1bbc2c7286       3a5aa3a515f5d       48 seconds ago       Running             kube-scheduler            2                   d103691acdba2
	f2b83caae1ea8       aebe758cef4cd       48 seconds ago       Running             etcd                      2                   2d778bb64bdad
	bbdd5e71cca87       2ae1ba6417cbc       48 seconds ago       Running             kube-proxy                2                   5855ae5645a9d
	5a2ac95865ea9       a4ca41631cc7a       49 seconds ago       Running             coredns                   2                   85582dc6588d6
	c82b42fd65142       d521dd763e2e3       49 seconds ago       Running             kube-apiserver            1                   88529ebe5b6c6
	e7d642fc63e68       aebe758cef4cd       About a minute ago   Exited              etcd                      1                   0a90ff2c32aef
	a43c1796e7395       a4ca41631cc7a       About a minute ago   Exited              coredns                   1                   4ebb6b04064eb
	351eab81842fc       2ae1ba6417cbc       About a minute ago   Exited              kube-proxy                1                   c5c65df3d239a
	c134f1b78b058       3a5aa3a515f5d       About a minute ago   Exited              kube-scheduler            1                   8cfa32d7dd93a
	ec78e0c377e1b       586c112956dfc       About a minute ago   Exited              kube-controller-manager   1                   cd776c85d8ed6
	e44bd1ff2d3f4       d521dd763e2e3       About a minute ago   Exited              kube-apiserver            0                   da15f278f5980
	
	* 
	* ==> coredns [5a2ac95865ea] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> coredns [a43c1796e739] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.001439] FS-Cache: O-key=[8] '377f2e0300000000'
	[  +0.001052] FS-Cache: N-cookie c=000000003d1587d4 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001741] FS-Cache: N-cookie d=00000000b3020e27 n=00000000e968409d
	[  +0.001451] FS-Cache: N-key=[8] '377f2e0300000000'
	[  +0.001921] FS-Cache: Duplicate cookie detected
	[  +0.001022] FS-Cache: O-cookie c=00000000fc272a13 [p=00000000a81fea9c fl=226 nc=0 na=1]
	[  +0.001775] FS-Cache: O-cookie d=00000000b3020e27 n=0000000005e7aa76
	[  +0.001457] FS-Cache: O-key=[8] '377f2e0300000000'
	[  +0.001097] FS-Cache: N-cookie c=000000003d1587d4 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001747] FS-Cache: N-cookie d=00000000b3020e27 n=00000000992429b3
	[  +0.001462] FS-Cache: N-key=[8] '377f2e0300000000'
	[  +3.054735] FS-Cache: Duplicate cookie detected
	[  +0.001042] FS-Cache: O-cookie c=00000000d2d2cc51 [p=00000000a81fea9c fl=226 nc=0 na=1]
	[  +0.001759] FS-Cache: O-cookie d=00000000b3020e27 n=0000000030e50417
	[  +0.001442] FS-Cache: O-key=[8] '367f2e0300000000'
	[  +0.001131] FS-Cache: N-cookie c=000000007bcf2158 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001760] FS-Cache: N-cookie d=00000000b3020e27 n=00000000d445df4c
	[  +0.001503] FS-Cache: N-key=[8] '367f2e0300000000'
	[  +0.439912] FS-Cache: Duplicate cookie detected
	[  +0.001029] FS-Cache: O-cookie c=000000000a15bb65 [p=00000000a81fea9c fl=226 nc=0 na=1]
	[  +0.001773] FS-Cache: O-cookie d=00000000b3020e27 n=00000000a4d7a621
	[  +0.001447] FS-Cache: O-key=[8] '3e7f2e0300000000'
	[  +0.001103] FS-Cache: N-cookie c=000000001f485fd0 [p=00000000a81fea9c fl=2 nc=0 na=1]
	[  +0.001738] FS-Cache: N-cookie d=00000000b3020e27 n=000000000eab18f1
	[  +0.001440] FS-Cache: N-key=[8] '3e7f2e0300000000'
	
	* 
	* ==> etcd [e7d642fc63e6] <==
	* {"level":"info","ts":"2022-07-28T22:30:40.736Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T22:30:40.736Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-28T22:30:40.736Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T22:30:42.031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-07-28T22:30:42.031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-07-28T22:30:42.031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T22:30:42.031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-07-28T22:30:42.031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-07-28T22:30:42.031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-07-28T22:30:42.031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-07-28T22:30:42.035Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-20220728152948-12923 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T22:30:42.035Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:30:42.035Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:30:42.035Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T22:30:42.035Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T22:30:42.037Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-28T22:30:42.037Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-07-28T22:30:42.268Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-07-28T22:30:42.302Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"pause-20220728152948-12923","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/07/28 22:30:42 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/07/28 22:30:42 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-07-28T22:30:42.306Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-07-28T22:30:42.307Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T22:30:42.309Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T22:30:42.309Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"pause-20220728152948-12923","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> etcd [f2b83caae1ea] <==
	* {"level":"info","ts":"2022-07-28T22:30:52.809Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-07-28T22:30:52.809Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-07-28T22:30:52.809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-07-28T22:30:52.809Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-07-28T22:30:52.809Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:30:52.809Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:30:52.810Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-28T22:30:52.810Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-28T22:30:52.810Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T22:30:52.810Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T22:30:52.810Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T22:30:54.102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2022-07-28T22:30:54.102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-07-28T22:30:54.102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-07-28T22:30:54.102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2022-07-28T22:30:54.102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-07-28T22:30:54.102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2022-07-28T22:30:54.102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-07-28T22:30:54.102Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-20220728152948-12923 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T22:30:54.102Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:30:54.102Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:30:54.102Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T22:30:54.102Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T22:30:54.104Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-28T22:30:54.105Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:31:50 up 52 min,  0 users,  load average: 0.76, 1.08, 0.84
	Linux pause-20220728152948-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [c82b42fd6514] <==
	* I0728 22:30:55.778274       1 establishing_controller.go:76] Starting EstablishingController
	I0728 22:30:55.778281       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0728 22:30:55.778293       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0728 22:30:55.778301       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0728 22:30:55.778321       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0728 22:30:55.779671       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0728 22:30:55.779702       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0728 22:30:55.779718       1 available_controller.go:491] Starting AvailableConditionController
	I0728 22:30:55.779721       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0728 22:30:55.780216       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0728 22:30:55.879853       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0728 22:30:55.880486       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0728 22:30:55.879888       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0728 22:30:55.881475       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0728 22:30:55.881510       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0728 22:30:55.881513       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0728 22:30:55.881486       1 cache.go:39] Caches are synced for autoregister controller
	I0728 22:30:55.887252       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0728 22:30:55.896905       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0728 22:30:56.559054       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0728 22:30:56.781553       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0728 22:30:58.245853       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0728 22:30:58.255899       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0728 22:30:58.260541       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0728 22:30:58.265064       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [e44bd1ff2d3f] <==
	* W0728 22:30:40.331563       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.359942       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.364719       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.386124       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.392657       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.401048       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.408049       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.426442       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.470834       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.479622       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.495695       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.556480       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.562090       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.574460       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.587076       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.611350       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.625028       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.630897       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.640176       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.647785       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.680476       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.721829       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.724252       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 22:30:40.724318       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	{"level":"warn","ts":"2022-07-28T22:30:41.402Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc002ebe1c0/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}
	
	* 
	* ==> kube-controller-manager [100b3142274b] <==
	* I0728 22:30:58.009419       1 publisher.go:107] Starting root CA certificate configmap publisher
	I0728 22:30:58.009425       1 shared_informer.go:255] Waiting for caches to sync for crt configmap
	I0728 22:30:58.010658       1 controllermanager.go:593] Started "podgc"
	I0728 22:30:58.010868       1 gc_controller.go:92] Starting GC controller
	I0728 22:30:58.010876       1 shared_informer.go:255] Waiting for caches to sync for GC
	I0728 22:30:58.011701       1 shared_informer.go:262] Caches are synced for tokens
	I0728 22:30:58.012047       1 controllermanager.go:593] Started "serviceaccount"
	I0728 22:30:58.012157       1 serviceaccounts_controller.go:117] Starting service account controller
	I0728 22:30:58.012162       1 shared_informer.go:255] Waiting for caches to sync for service account
	I0728 22:30:58.013271       1 controllermanager.go:593] Started "daemonset"
	I0728 22:30:58.013532       1 daemon_controller.go:284] Starting daemon sets controller
	I0728 22:30:58.013559       1 shared_informer.go:255] Waiting for caches to sync for daemon sets
	I0728 22:30:58.014481       1 controllermanager.go:593] Started "persistentvolume-binder"
	I0728 22:30:58.014576       1 pv_controller_base.go:311] Starting persistent volume controller
	I0728 22:30:58.014581       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
	I0728 22:30:58.016085       1 controllermanager.go:593] Started "ttl-after-finished"
	I0728 22:30:58.016191       1 ttlafterfinished_controller.go:109] Starting TTL after finished controller
	I0728 22:30:58.016198       1 shared_informer.go:255] Waiting for caches to sync for TTL after finished
	I0728 22:30:58.017503       1 controllermanager.go:593] Started "endpointslice"
	I0728 22:30:58.017608       1 endpointslice_controller.go:257] Starting endpoint slice controller
	I0728 22:30:58.017679       1 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
	I0728 22:30:58.018958       1 controllermanager.go:593] Started "statefulset"
	I0728 22:30:58.018987       1 stateful_set.go:147] Starting stateful set controller
	I0728 22:30:58.018993       1 shared_informer.go:255] Waiting for caches to sync for stateful set
	I0728 22:30:58.063194       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [ec78e0c377e1] <==
	* I0728 22:30:36.245958       1 serving.go:348] Generated self-signed cert in-memory
	I0728 22:30:36.391319       1 controllermanager.go:180] Version: v1.24.3
	I0728 22:30:36.391358       1 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:30:36.392170       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0728 22:30:36.392213       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0728 22:30:36.392169       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0728 22:30:36.392179       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-proxy [351eab81842f] <==
	* 
	* 
	* ==> kube-proxy [bbdd5e71cca8] <==
	* I0728 22:30:55.804120       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0728 22:30:55.804187       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0728 22:30:55.804205       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 22:30:55.892385       1 server_others.go:206] "Using iptables Proxier"
	I0728 22:30:55.892470       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 22:30:55.892481       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 22:30:55.892490       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 22:30:55.892509       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:30:55.893125       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:30:55.893532       1 server.go:661] "Version info" version="v1.24.3"
	I0728 22:30:55.893560       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:30:55.894805       1 config.go:317] "Starting service config controller"
	I0728 22:30:55.894835       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 22:30:55.894858       1 config.go:226] "Starting endpoint slice config controller"
	I0728 22:30:55.894863       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 22:30:55.895489       1 config.go:444] "Starting node config controller"
	I0728 22:30:55.895498       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 22:30:55.995057       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0728 22:30:55.995189       1 shared_informer.go:262] Caches are synced for service config
	I0728 22:30:55.995629       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [47e1bbc2c728] <==
	* I0728 22:30:53.134484       1 serving.go:348] Generated self-signed cert in-memory
	I0728 22:30:55.814255       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0728 22:30:55.814288       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:30:55.818953       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0728 22:30:55.819093       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0728 22:30:55.819176       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0728 22:30:55.819202       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0728 22:30:55.821018       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0728 22:30:55.821080       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0728 22:30:55.821161       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0728 22:30:55.821262       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0728 22:30:55.920384       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0728 22:30:55.922024       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0728 22:30:55.922135       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [c134f1b78b05] <==
	* I0728 22:30:37.361264       1 serving.go:348] Generated self-signed cert in-memory
	W0728 22:30:42.691207       1 authentication.go:346] Error looking up in-cluster authentication configuration: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.67.2:8443: connect: connection refused
	W0728 22:30:42.691240       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0728 22:30:42.691246       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0728 22:30:42.693632       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0728 22:30:42.693667       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:30:42.694618       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0728 22:30:42.694641       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0728 22:30:42.694656       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0728 22:30:42.694675       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0728 22:30:42.694710       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0728 22:30:42.694707       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0728 22:30:42.695934       1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0728 22:30:42.695973       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0728 22:30:42.697123       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 22:29:56 UTC, end at Thu 2022-07-28 22:31:52 UTC. --
	Jul 28 22:30:51 pause-20220728152948-12923 kubelet[1946]: I0728 22:30:51.295596    1946 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="cd776c85d8ed631edbf31748fa7a692d755e3bda88a665b06335adfff8079fac"
	Jul 28 22:30:51 pause-20220728152948-12923 kubelet[1946]: I0728 22:30:51.299283    1946 scope.go:110] "RemoveContainer" containerID="4bb8266fe70e9185dec370f83ef5046cf1821372e75512b553ee6695cd087319"
	Jul 28 22:30:51 pause-20220728152948-12923 kubelet[1946]: I0728 22:30:51.305359    1946 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0a90ff2c32aef5abff18575af67a2c15ef554b048b55fd5a1dca7e31663eb7e6"
	Jul 28 22:30:51 pause-20220728152948-12923 kubelet[1946]: I0728 22:30:51.305768    1946 status_manager.go:664] "Failed to get status for pod" podUID=db313a77afb770cefc0c50cfca94a550 pod="kube-system/etcd-pause-20220728152948-12923" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-20220728152948-12923\": dial tcp 192.168.67.2:8443: connect: connection refused"
	Jul 28 22:30:51 pause-20220728152948-12923 kubelet[1946]: I0728 22:30:51.315423    1946 scope.go:110] "RemoveContainer" containerID="635312f44f77e3a3bec9b3f7eb35cfbf5d3bace342ad1c408674db3da0c6fd25"
	Jul 28 22:30:51 pause-20220728152948-12923 kubelet[1946]: I0728 22:30:51.381302    1946 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="4ebb6b04064ebfc254c29060ff2a4ab0570eaceac6517eff70396709ad507617"
	Jul 28 22:30:51 pause-20220728152948-12923 kubelet[1946]: I0728 22:30:51.382033    1946 status_manager.go:664] "Failed to get status for pod" podUID=413e6b61-7e29-4c25-a9f7-9343ee09b5be pod="kube-system/coredns-6d4b75cb6d-85ds4" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-6d4b75cb6d-85ds4\": dial tcp 192.168.67.2:8443: connect: connection refused"
	Jul 28 22:30:51 pause-20220728152948-12923 kubelet[1946]: I0728 22:30:51.390214    1946 scope.go:110] "RemoveContainer" containerID="b0eb225cce1fff05fdffd3139a6f85f886c89bd010ee693128c0029af489711d"
	Jul 28 22:30:51 pause-20220728152948-12923 kubelet[1946]: E0728 22:30:51.792069    1946 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd pod=etcd-pause-20220728152948-12923_kube-system(db313a77afb770cefc0c50cfca94a550)\"" pod="kube-system/etcd-pause-20220728152948-12923" podUID=db313a77afb770cefc0c50cfca94a550
	Jul 28 22:30:51 pause-20220728152948-12923 kubelet[1946]: E0728 22:30:51.820594    1946 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-proxy pod=kube-proxy-h87p7_kube-system(8b18ec6c-4080-4c7d-8911-462ea480c966)\"" pod="kube-system/kube-proxy-h87p7" podUID=8b18ec6c-4080-4c7d-8911-462ea480c966
	Jul 28 22:30:52 pause-20220728152948-12923 kubelet[1946]: I0728 22:30:52.393204    1946 scope.go:110] "RemoveContainer" containerID="351eab81842fc67791d050a5221595e7b2bd10c69099f58e52486f5a8363cbad"
	Jul 28 22:30:52 pause-20220728152948-12923 kubelet[1946]: I0728 22:30:52.433055    1946 scope.go:110] "RemoveContainer" containerID="e7d642fc63e680355678f6be88a62a4542f3706792ccf7a6c518c67071ba34de"
	Jul 28 22:30:55 pause-20220728152948-12923 kubelet[1946]: E0728 22:30:55.781835    1946 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 28 22:30:55 pause-20220728152948-12923 kubelet[1946]: E0728 22:30:55.785212    1946 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 28 22:30:55 pause-20220728152948-12923 kubelet[1946]: E0728 22:30:55.785293    1946 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 28 22:31:00 pause-20220728152948-12923 kubelet[1946]: I0728 22:31:00.798018    1946 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 22:31:00 pause-20220728152948-12923 kubelet[1946]: E0728 22:31:00.798070    1946 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="5a857247-4bff-473d-b3dd-164471c56780" containerName="coredns"
	Jul 28 22:31:00 pause-20220728152948-12923 kubelet[1946]: I0728 22:31:00.798089    1946 memory_manager.go:345] "RemoveStaleState removing state" podUID="5a857247-4bff-473d-b3dd-164471c56780" containerName="coredns"
	Jul 28 22:31:00 pause-20220728152948-12923 kubelet[1946]: I0728 22:31:00.962921    1946 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ccf4c686-4b17-4128-b9d7-bcea8e6fa66c-tmp\") pod \"storage-provisioner\" (UID: \"ccf4c686-4b17-4128-b9d7-bcea8e6fa66c\") " pod="kube-system/storage-provisioner"
	Jul 28 22:31:00 pause-20220728152948-12923 kubelet[1946]: I0728 22:31:00.963069    1946 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlx44\" (UniqueName: \"kubernetes.io/projected/ccf4c686-4b17-4128-b9d7-bcea8e6fa66c-kube-api-access-qlx44\") pod \"storage-provisioner\" (UID: \"ccf4c686-4b17-4128-b9d7-bcea8e6fa66c\") " pod="kube-system/storage-provisioner"
	Jul 28 22:31:01 pause-20220728152948-12923 kubelet[1946]: I0728 22:31:01.495407    1946 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=5a857247-4bff-473d-b3dd-164471c56780 path="/var/lib/kubelet/pods/5a857247-4bff-473d-b3dd-164471c56780/volumes"
	Jul 28 22:31:06 pause-20220728152948-12923 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jul 28 22:31:06 pause-20220728152948-12923 systemd[1]: kubelet.service: Succeeded.
	Jul 28 22:31:06 pause-20220728152948-12923 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 28 22:31:06 pause-20220728152948-12923 systemd[1]: kubelet.service: Consumed 1.685s CPU time.
	
	* 
	* ==> storage-provisioner [1ffb62f011db] <==
	* I0728 22:31:01.297517       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 22:31:01.308782       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 22:31:01.308862       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0728 22:31:01.319884       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0728 22:31:01.320016       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220728152948-12923_07962a57-537b-4f96-ae63-11ecc559be9b!
	I0728 22:31:01.320740       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6159ca1c-8c41-41cc-890e-7bda2981cdce", APIVersion:"v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220728152948-12923_07962a57-537b-4f96-ae63-11ecc559be9b became leader
	I0728 22:31:01.420347       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220728152948-12923_07962a57-537b-4f96-ae63-11ecc559be9b!

-- /stdout --
** stderr ** 
	E0728 15:31:50.435646   25671 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220728152948-12923 -n pause-20220728152948-12923
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p pause-20220728152948-12923 -n pause-20220728152948-12923: exit status 2 (16.09857661s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-20220728152948-12923" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestPause/serial/VerifyStatus (61.68s)

TestStartStop/group/old-k8s-version/serial/FirstStart (250.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220728153807-12923 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220728153807-12923 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m9.660741756s)

-- stdout --
	* [old-k8s-version-20220728153807-12923] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-20220728153807-12923 in cluster old-k8s-version-20220728153807-12923
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...

-- /stdout --
** stderr ** 
	I0728 15:38:07.250726   27927 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:38:07.250882   27927 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:38:07.250887   27927 out.go:309] Setting ErrFile to fd 2...
	I0728 15:38:07.250891   27927 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:38:07.250996   27927 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:38:07.251517   27927 out.go:303] Setting JSON to false
	I0728 15:38:07.266615   27927 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":8929,"bootTime":1659038958,"procs":351,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:38:07.266690   27927 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:38:07.290353   27927 out.go:177] * [old-k8s-version-20220728153807-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:38:07.311861   27927 notify.go:193] Checking for updates...
	I0728 15:38:07.333343   27927 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:38:07.354534   27927 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:38:07.376593   27927 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:38:07.399713   27927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:38:07.421825   27927 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:38:07.444324   27927 config.go:178] Loaded profile config "kubenet-20220728152330-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:38:07.444410   27927 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:38:07.512412   27927 docker.go:137] docker version: linux-20.10.17
	I0728 15:38:07.512532   27927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:38:07.641928   27927 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:38:07.583548413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:38:07.663820   27927 out.go:177] * Using the docker driver based on user configuration
	I0728 15:38:07.707907   27927 start.go:284] selected driver: docker
	I0728 15:38:07.707933   27927 start.go:808] validating driver "docker" against <nil>
	I0728 15:38:07.707956   27927 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:38:07.711402   27927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:38:07.842633   27927 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:38:07.783661183 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:38:07.842731   27927 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0728 15:38:07.842863   27927 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 15:38:07.864790   27927 out.go:177] * Using Docker Desktop driver with root privileges
	I0728 15:38:07.886701   27927 cni.go:95] Creating CNI manager for ""
	I0728 15:38:07.886733   27927 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:38:07.886752   27927 start_flags.go:310] config:
	{Name:old-k8s-version-20220728153807-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:38:07.908696   27927 out.go:177] * Starting control plane node old-k8s-version-20220728153807-12923 in cluster old-k8s-version-20220728153807-12923
	I0728 15:38:07.951653   27927 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:38:07.973592   27927 out.go:177] * Pulling base image ...
	I0728 15:38:08.016517   27927 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 15:38:08.016519   27927 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:38:08.016568   27927 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0728 15:38:08.016577   27927 cache.go:57] Caching tarball of preloaded images
	I0728 15:38:08.016675   27927 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:38:08.016686   27927 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
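The preload check above is nothing more exotic than a stat on the cached tarball path, skipping the download when the file exists. A runnable sketch of that check, with a hypothetical cache path modeled on (not identical to) the one in the log:

    package main

    import (
        "errors"
        "fmt"
        "log"
        "os"
        "path/filepath"
    )

    func main() {
        // Hypothetical cache layout mirroring the paths in the log above.
        home, err := os.UserHomeDir()
        if err != nil {
            log.Fatal(err)
        }
        tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
            "preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4")

        if _, err := os.Stat(tarball); errors.Is(err, os.ErrNotExist) {
            fmt.Println("preload missing, would download:", tarball)
            return
        } else if err != nil {
            log.Fatal("stat failed:", err)
        }
        fmt.Println("found local preload, skipping download:", tarball)
    }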
	I0728 15:38:08.017303   27927 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/config.json ...
	I0728 15:38:08.017397   27927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/config.json: {Name:mk032086ee7c2bed36eac8a5783250f0b059be4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:38:08.078596   27927 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:38:08.078621   27927 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:38:08.078632   27927 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:38:08.078685   27927 start.go:370] acquiring machines lock for old-k8s-version-20220728153807-12923: {Name:mke15a14ac0b96e8c97ba263723c52eb5c7e7def Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:38:08.078832   27927 start.go:374] acquired machines lock for "old-k8s-version-20220728153807-12923" in 136.234µs
	I0728 15:38:08.078861   27927 start.go:92] Provisioning new machine with config: &{Name:old-k8s-version-20220728153807-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 15:38:08.078921   27927 start.go:132] createHost starting for "" (driver="docker")
	I0728 15:38:08.100743   27927 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0728 15:38:08.101185   27927 start.go:166] libmachine.API.Create for "old-k8s-version-20220728153807-12923" (driver="docker")
	I0728 15:38:08.101279   27927 client.go:168] LocalClient.Create starting
	I0728 15:38:08.101464   27927 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem
	I0728 15:38:08.101555   27927 main.go:134] libmachine: Decoding PEM data...
	I0728 15:38:08.101606   27927 main.go:134] libmachine: Parsing certificate...
	I0728 15:38:08.101732   27927 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem
	I0728 15:38:08.101791   27927 main.go:134] libmachine: Decoding PEM data...
	I0728 15:38:08.101814   27927 main.go:134] libmachine: Parsing certificate...
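The three libmachine steps above (read, decode PEM, parse certificate) map directly onto the Go standard library. A self-contained sketch, assuming a ca.pem in the working directory rather than the profile paths shown in the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("ca.pem") // "Reading certificate data from ..."
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data) // "Decoding PEM data..."
        if block == nil || block.Type != "CERTIFICATE" {
            log.Fatal("no CERTIFICATE block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("CA subject:", cert.Subject, "expires:", cert.NotAfter)
    }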
	I0728 15:38:08.102468   27927 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220728153807-12923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0728 15:38:08.164751   27927 cli_runner.go:211] docker network inspect old-k8s-version-20220728153807-12923 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0728 15:38:08.164841   27927 network_create.go:272] running [docker network inspect old-k8s-version-20220728153807-12923] to gather additional debugging logs...
	I0728 15:38:08.164856   27927 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220728153807-12923
	W0728 15:38:08.226254   27927 cli_runner.go:211] docker network inspect old-k8s-version-20220728153807-12923 returned with exit code 1
	I0728 15:38:08.226283   27927 network_create.go:275] error running [docker network inspect old-k8s-version-20220728153807-12923]: docker network inspect old-k8s-version-20220728153807-12923: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220728153807-12923
	I0728 15:38:08.226304   27927 network_create.go:277] output of [docker network inspect old-k8s-version-20220728153807-12923]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220728153807-12923
	
	** /stderr **
	I0728 15:38:08.226384   27927 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0728 15:38:08.286893   27927 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000c10800] misses:0}
	I0728 15:38:08.286930   27927 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:38:08.286946   27927 network_create.go:115] attempt to create docker network old-k8s-version-20220728153807-12923 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0728 15:38:08.287017   27927 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220728153807-12923 old-k8s-version-20220728153807-12923
	W0728 15:38:08.347685   27927 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220728153807-12923 old-k8s-version-20220728153807-12923 returned with exit code 1
	W0728 15:38:08.347717   27927 network_create.go:107] failed to create docker network old-k8s-version-20220728153807-12923 192.168.49.0/24, will retry: subnet is taken
	I0728 15:38:08.348021   27927 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c10800] amended:false}} dirty:map[] misses:0}
	I0728 15:38:08.348036   27927 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:38:08.348247   27927 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c10800] amended:true}} dirty:map[192.168.49.0:0xc000c10800 192.168.58.0:0xc00080c1f8] misses:0}
	I0728 15:38:08.348262   27927 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:38:08.348269   27927 network_create.go:115] attempt to create docker network old-k8s-version-20220728153807-12923 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0728 15:38:08.348362   27927 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220728153807-12923 old-k8s-version-20220728153807-12923
	W0728 15:38:08.410077   27927 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220728153807-12923 old-k8s-version-20220728153807-12923 returned with exit code 1
	W0728 15:38:08.410117   27927 network_create.go:107] failed to create docker network old-k8s-version-20220728153807-12923 192.168.58.0/24, will retry: subnet is taken
	I0728 15:38:08.410399   27927 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c10800] amended:true}} dirty:map[192.168.49.0:0xc000c10800 192.168.58.0:0xc00080c1f8] misses:1}
	I0728 15:38:08.410421   27927 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:38:08.410625   27927 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c10800] amended:true}} dirty:map[192.168.49.0:0xc000c10800 192.168.58.0:0xc00080c1f8 192.168.67.0:0xc000c10838] misses:1}
	I0728 15:38:08.410644   27927 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:38:08.410656   27927 network_create.go:115] attempt to create docker network old-k8s-version-20220728153807-12923 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0728 15:38:08.410713   27927 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220728153807-12923 old-k8s-version-20220728153807-12923
	W0728 15:38:08.473077   27927 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220728153807-12923 old-k8s-version-20220728153807-12923 returned with exit code 1
	W0728 15:38:08.473108   27927 network_create.go:107] failed to create docker network old-k8s-version-20220728153807-12923 192.168.67.0/24, will retry: subnet is taken
	I0728 15:38:08.473395   27927 network.go:279] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c10800] amended:true}} dirty:map[192.168.49.0:0xc000c10800 192.168.58.0:0xc00080c1f8 192.168.67.0:0xc000c10838] misses:2}
	I0728 15:38:08.473417   27927 network.go:238] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:38:08.473613   27927 network.go:288] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000c10800] amended:true}} dirty:map[192.168.49.0:0xc000c10800 192.168.58.0:0xc00080c1f8 192.168.67.0:0xc000c10838 192.168.76.0:0xc00080c230] misses:2}
	I0728 15:38:08.473627   27927 network.go:235] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0728 15:38:08.473634   27927 network_create.go:115] attempt to create docker network old-k8s-version-20220728153807-12923 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0728 15:38:08.473686   27927 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-20220728153807-12923 old-k8s-version-20220728153807-12923
	I0728 15:38:08.567146   27927 network_create.go:99] docker network old-k8s-version-20220728153807-12923 192.168.76.0/24 created
	I0728 15:38:08.567176   27927 kic.go:106] calculated static IP "192.168.76.2" for the "old-k8s-version-20220728153807-12923" container
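The sequence above is a retry loop: reserve 192.168.49.0/24, attempt docker network create, and on "subnet is taken" step to the next block (49, 58, 67, 76) until a create succeeds, at which point the gateway is .1 and the container is assigned the static .2 address. A sketch of that walk; the step of 9, the retry bound, and the matched error strings are assumptions read off this output, not minikube's code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // createNetwork tries successive /24s until `docker network create`
    // succeeds, mirroring the 49 -> 58 -> 67 -> 76 walk in the log.
    func createNetwork(name string) (string, error) {
        for octet := 49; octet <= 103; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            gateway := fmt.Sprintf("192.168.%d.1", octet)
            out, err := exec.Command("docker", "network", "create",
                "--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
            if err == nil {
                return subnet, nil
            }
            // Error-string matching is brittle and shown only for illustration.
            if strings.Contains(string(out), "Pool overlaps") ||
                strings.Contains(string(out), "is already being used") {
                continue // subnet is taken, try the next block
            }
            return "", fmt.Errorf("network create: %v: %s", err, out)
        }
        return "", fmt.Errorf("no free subnet found for %s", name)
    }

    func main() {
        subnet, err := createNetwork("demo-net")
        if err != nil {
            fmt.Println(err)
            return
        }
        // First usable address after the gateway becomes the container's static IP.
        fmt.Println("created", subnet, "- static container IP would be .2")
    }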
	I0728 15:38:08.567295   27927 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0728 15:38:08.631262   27927 cli_runner.go:164] Run: docker volume create old-k8s-version-20220728153807-12923 --label name.minikube.sigs.k8s.io=old-k8s-version-20220728153807-12923 --label created_by.minikube.sigs.k8s.io=true
	I0728 15:38:08.693876   27927 oci.go:103] Successfully created a docker volume old-k8s-version-20220728153807-12923
	I0728 15:38:08.693995   27927 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-20220728153807-12923-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220728153807-12923 --entrypoint /usr/bin/test -v old-k8s-version-20220728153807-12923:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -d /var/lib
	I0728 15:38:09.194496   27927 oci.go:107] Successfully prepared a docker volume old-k8s-version-20220728153807-12923
	I0728 15:38:09.194552   27927 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 15:38:09.194566   27927 kic.go:179] Starting extracting preloaded images to volume ...
	I0728 15:38:09.194680   27927 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220728153807-12923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir
	I0728 15:38:13.925302   27927 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220728153807-12923:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 -I lz4 -xf /preloaded.tar -C /extractDir: (4.730616433s)
	I0728 15:38:13.925323   27927 kic.go:188] duration metric: took 4.730819 seconds to extract preloaded images to volume
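The extraction trick above is worth noting: rather than unpacking on the host, a throwaway container is run with tar as its entrypoint, the tarball bind-mounted read-only, and the named volume mounted as the target, so the preloaded images land inside the volume. A sketch of the same pattern; the tarball path, volume name, and image are placeholders, and the image is assumed to ship both tar and the lz4 binary:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractIntoVolume replays the pattern above: a one-shot container whose
    // entrypoint is tar, with the host tarball mounted read-only and the named
    // volume mounted as the extraction target.
    func extractIntoVolume(tarball, volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := extractIntoVolume(
            "/tmp/preloaded-images.tar.lz4", // hypothetical local tarball
            "demo-volume",                   // named volume created beforehand
            "some/image:with-tar-and-lz4")   // placeholder; must contain tar and lz4
        fmt.Println("extract result:", err)
    }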
	I0728 15:38:13.925434   27927 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0728 15:38:14.073519   27927 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220728153807-12923 --name old-k8s-version-20220728153807-12923 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220728153807-12923 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220728153807-12923 --network old-k8s-version-20220728153807-12923 --ip 192.168.76.2 --volume old-k8s-version-20220728153807-12923:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842
	I0728 15:38:14.505737   27927 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220728153807-12923 --format={{.State.Running}}
	I0728 15:38:14.571899   27927 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220728153807-12923 --format={{.State.Status}}
	I0728 15:38:14.643096   27927 cli_runner.go:164] Run: docker exec old-k8s-version-20220728153807-12923 stat /var/lib/dpkg/alternatives/iptables
	I0728 15:38:14.769306   27927 oci.go:144] the created container "old-k8s-version-20220728153807-12923" has a running status.
	I0728 15:38:14.769335   27927 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa...
	I0728 15:38:14.961744   27927 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0728 15:38:15.081728   27927 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220728153807-12923 --format={{.State.Status}}
	I0728 15:38:15.150577   27927 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0728 15:38:15.150594   27927 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220728153807-12923 chown docker:docker /home/docker/.ssh/authorized_keys]
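Per the kic.go and kic_runner lines above, an RSA machine key is generated and its public half is copied into /home/docker/.ssh/authorized_keys inside the container, then chowned. A sketch of producing such a key pair, assuming the golang.org/x/crypto/ssh package for the authorized_keys encoding; this is not minikube's exact code path:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        // PEM-encode the private key for the machines/<name>/id_rsa file.
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            log.Fatal(err)
        }
        // MarshalAuthorizedKey yields the one-line "ssh-rsa AAAA..." format
        // suitable for appending to authorized_keys.
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            log.Fatal(err)
        }
    }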
	I0728 15:38:15.263761   27927 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220728153807-12923 --format={{.State.Status}}
	I0728 15:38:15.330850   27927 machine.go:88] provisioning docker machine ...
	I0728 15:38:15.330897   27927 ubuntu.go:169] provisioning hostname "old-k8s-version-20220728153807-12923"
	I0728 15:38:15.330990   27927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:38:15.395648   27927 main.go:134] libmachine: Using SSH client type: native
	I0728 15:38:15.395871   27927 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58781 <nil> <nil>}
	I0728 15:38:15.395892   27927 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220728153807-12923 && echo "old-k8s-version-20220728153807-12923" | sudo tee /etc/hostname
	I0728 15:38:15.525188   27927 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220728153807-12923
	
	I0728 15:38:15.525269   27927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:38:15.588709   27927 main.go:134] libmachine: Using SSH client type: native
	I0728 15:38:15.588988   27927 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58781 <nil> <nil>}
	I0728 15:38:15.589004   27927 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220728153807-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220728153807-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220728153807-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:38:15.709466   27927 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:38:15.709492   27927 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:38:15.709518   27927 ubuntu.go:177] setting up certificates
	I0728 15:38:15.709529   27927 provision.go:83] configureAuth start
	I0728 15:38:15.709595   27927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220728153807-12923
	I0728 15:38:15.774215   27927 provision.go:138] copyHostCerts
	I0728 15:38:15.774308   27927 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:38:15.774317   27927 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:38:15.774415   27927 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:38:15.774595   27927 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:38:15.774613   27927 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:38:15.774677   27927 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:38:15.774836   27927 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:38:15.774843   27927 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:38:15.774902   27927 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:38:15.775013   27927 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220728153807-12923 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220728153807-12923]
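The provision.go line above mints a CA-signed server certificate whose SAN list carries the node IP, loopback, and hostnames. A self-contained sketch using crypto/x509, with a throwaway self-signed CA standing in for the ca.pem/ca-key.pem pair that copyHostCerts manages, and the 26280h lifetime taken from the config dump:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "log"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA standing in for the profile's CA key pair.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // Server certificate carrying the kind of SAN list shown above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"demo"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("server cert DER bytes:", len(der))
    }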
	I0728 15:38:15.899503   27927 provision.go:172] copyRemoteCerts
	I0728 15:38:15.899563   27927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:38:15.899611   27927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:38:15.963981   27927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58781 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:38:16.050382   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 15:38:16.066578   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:38:16.087903   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0728 15:38:16.108951   27927 provision.go:86] duration metric: configureAuth took 399.409148ms
	I0728 15:38:16.108966   27927 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:38:16.109115   27927 config.go:178] Loaded profile config "old-k8s-version-20220728153807-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0728 15:38:16.109180   27927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:38:16.177890   27927 main.go:134] libmachine: Using SSH client type: native
	I0728 15:38:16.178067   27927 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58781 <nil> <nil>}
	I0728 15:38:16.178085   27927 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:38:16.297211   27927 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:38:16.297224   27927 ubuntu.go:71] root file system type: overlay
	I0728 15:38:16.297346   27927 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:38:16.297412   27927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:38:16.363516   27927 main.go:134] libmachine: Using SSH client type: native
	I0728 15:38:16.363692   27927 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58781 <nil> <nil>}
	I0728 15:38:16.363743   27927 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:38:16.492801   27927 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:38:16.492886   27927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:38:16.556575   27927 main.go:134] libmachine: Using SSH client type: native
	I0728 15:38:16.556776   27927 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58781 <nil> <nil>}
	I0728 15:38:16.556790   27927 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:38:17.202459   27927 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-07-28 22:38:16.511658139 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0728 15:38:17.202499   27927 machine.go:91] provisioned docker machine in 1.871649133s
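The long SSH exchange that just finished hinges on one idiom: the provisioner renders docker.service.new, diffs it against the live unit, and only when diff exits non-zero does the || branch move the file into place and restart docker, so re-provisioning is a no-op when nothing changed. A small sketch that composes the same command string; the unit path is a parameter here, not a claim about other code paths:

    package main

    import "fmt"

    // updateUnitCmd reproduces the idiom above: install the new unit and
    // restart the daemon only when `diff` exits non-zero (i.e. the rendered
    // file differs from the one currently installed).
    func updateUnitCmd(unit string) string {
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || "+
                "{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
                "sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
            unit)
    }

    func main() {
        fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service"))
    }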
	I0728 15:38:17.202508   27927 client.go:171] LocalClient.Create took 9.101339793s
	I0728 15:38:17.202539   27927 start.go:174] duration metric: libmachine.API.Create for "old-k8s-version-20220728153807-12923" took 9.101474061s
	I0728 15:38:17.202560   27927 start.go:307] post-start starting for "old-k8s-version-20220728153807-12923" (driver="docker")
	I0728 15:38:17.202570   27927 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:38:17.202670   27927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:38:17.202741   27927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:38:17.288304   27927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58781 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:38:17.380618   27927 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:38:17.385616   27927 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:38:17.385635   27927 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:38:17.385646   27927 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:38:17.385655   27927 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:38:17.385675   27927 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:38:17.385789   27927 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:38:17.385925   27927 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:38:17.386076   27927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:38:17.396010   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:38:17.423596   27927 start.go:310] post-start completed in 221.017184ms
	I0728 15:38:17.424131   27927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220728153807-12923
	I0728 15:38:17.496370   27927 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/config.json ...
	I0728 15:38:17.496804   27927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:38:17.496857   27927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:38:17.562865   27927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58781 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:38:17.646793   27927 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:38:17.651157   27927 start.go:135] duration metric: createHost completed in 9.572356191s
	I0728 15:38:17.651175   27927 start.go:82] releasing machines lock for "old-k8s-version-20220728153807-12923", held for 9.572460526s
	I0728 15:38:17.651237   27927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220728153807-12923
	I0728 15:38:17.715411   27927 ssh_runner.go:195] Run: systemctl --version
	I0728 15:38:17.715416   27927 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:38:17.715479   27927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:38:17.715504   27927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:38:17.785144   27927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58781 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:38:17.785149   27927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58781 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:38:18.063520   27927 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:38:18.074596   27927 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:38:18.074650   27927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:38:18.083660   27927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:38:18.097614   27927 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:38:18.163505   27927 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:38:18.244996   27927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:38:18.317060   27927 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:38:18.522239   27927 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:38:18.556554   27927 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:38:18.667066   27927 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0728 15:38:18.667168   27927 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220728153807-12923 dig +short host.docker.internal
	I0728 15:38:18.803331   27927 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:38:18.803425   27927 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:38:18.807907   27927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
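The /etc/hosts edit above is idempotent by construction: filter out any stale host.minikube.internal line, append the fresh mapping, and copy a temp file over the original. A Go equivalent of the same filter-and-append, pointed at a scratch file so the sketch is safe to run:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the grep-and-append shell above: drop any stale
    // line for the name, append the fresh mapping, write via a temp file.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, path) // the log uses `sudo cp`; rename suffices for a scratch file
    }

    func main() {
        err := ensureHostsEntry("hosts.demo", "192.168.65.2", "host.minikube.internal")
        fmt.Println("update:", err)
    }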
	I0728 15:38:18.817213   27927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:38:18.881578   27927 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 15:38:18.881652   27927 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:38:18.910887   27927 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0728 15:38:18.910903   27927 docker.go:542] Images already preloaded, skipping extraction
	I0728 15:38:18.910987   27927 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:38:18.940953   27927 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0728 15:38:18.940975   27927 cache_images.go:84] Images are preloaded, skipping loading
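Both docker images invocations above feed the same check: is every expected image already present in the daemon, so image loading can be skipped. A sketch of that comparison; the expected list is abbreviated from the output above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagesPreloaded reports whether every expected image shows up in
    // `docker images`, the probe the two log invocations above perform.
    func imagesPreloaded(expected []string) (bool, error) {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range strings.Fields(string(out)) {
            have[img] = true
        }
        for _, img := range expected {
            if !have[img] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ok, err := imagesPreloaded([]string{
            "k8s.gcr.io/kube-apiserver:v1.16.0",
            "k8s.gcr.io/etcd:3.3.15-0",
            "k8s.gcr.io/pause:3.1",
        })
        fmt.Println("preloaded:", ok, err)
    }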
	I0728 15:38:18.941048   27927 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:38:19.016486   27927 cni.go:95] Creating CNI manager for ""
	I0728 15:38:19.016499   27927 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:38:19.016511   27927 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:38:19.016523   27927 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220728153807-12923 NodeName:old-k8s-version-20220728153807-12923 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:38:19.016644   27927 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220728153807-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220728153807-12923
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
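Once rendered, this config is staged on the node (see the kubeadm.yaml.new scp a few lines below) and ultimately drives the cluster bring-up. A hypothetical sketch of feeding such a file to kubeadm; the binary path mirrors the one the log verifies under /var/lib/minikube/binaries, and the exact extra flags the real invocation adds are not shown here:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Hypothetical invocation: drive cluster creation from the rendered
        // config file using kubeadm's --config flag.
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.16.0/kubeadm",
            "init", "--config", "/var/tmp/minikube/kubeadm.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s\nerr: %v\n", out, err)
    }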
	I0728 15:38:19.016721   27927 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220728153807-12923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0728 15:38:19.016785   27927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0728 15:38:19.024430   27927 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:38:19.024491   27927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:38:19.031591   27927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0728 15:38:19.045956   27927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:38:19.058828   27927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0728 15:38:19.073795   27927 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:38:19.077968   27927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
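	(The bash one-liner above is an idempotent hosts-file update: drop any stale control-plane.minikube.internal line, append the current mapping, and replace /etc/hosts via a temp file. The same technique in isolation, using $'...' for the literal tab the log embeds:

	    # Rewrite the control-plane entry without duplicating it (same pattern as the log line above)
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
	      echo $'192.168.76.2\tcontrol-plane.minikube.internal'; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
	)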
	I0728 15:38:19.088322   27927 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923 for IP: 192.168.76.2
	I0728 15:38:19.088448   27927 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:38:19.088504   27927 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:38:19.088552   27927 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/client.key
	I0728 15:38:19.088566   27927 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/client.crt with IP's: []
	I0728 15:38:19.252005   27927 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/client.crt ...
	I0728 15:38:19.252027   27927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/client.crt: {Name:mka033eff282b954f3b2dde5752bd32e62161e74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:38:19.252325   27927 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/client.key ...
	I0728 15:38:19.252334   27927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/client.key: {Name:mkbda0593c134af0dbf46c1c2781d8a68993e61e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:38:19.252530   27927 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.key.31bdca25
	I0728 15:38:19.252545   27927 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0728 15:38:19.307652   27927 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.crt.31bdca25 ...
	I0728 15:38:19.307668   27927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.crt.31bdca25: {Name:mk669d1ca2802bdb63bdbad49562b2b5961e4fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:38:19.307932   27927 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.key.31bdca25 ...
	I0728 15:38:19.307940   27927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.key.31bdca25: {Name:mkf4d087e4021e6ada42b74fefa47868c546b41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:38:19.308131   27927 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.crt
	I0728 15:38:19.308292   27927 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.key
	I0728 15:38:19.308459   27927 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.key
	I0728 15:38:19.308474   27927 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.crt with IP's: []
	I0728 15:38:19.455366   27927 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.crt ...
	I0728 15:38:19.455381   27927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.crt: {Name:mk845294ccfee4134a79875b8362eed7cb69091f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:38:19.455658   27927 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.key ...
	I0728 15:38:19.455665   27927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.key: {Name:mk74e0c9db3049489b1f3eb531bde2d6b1d72bd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:38:19.456042   27927 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:38:19.456082   27927 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:38:19.456094   27927 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:38:19.456124   27927 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:38:19.456153   27927 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:38:19.456180   27927 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:38:19.456242   27927 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:38:19.456685   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:38:19.474959   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0728 15:38:19.493378   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:38:19.512070   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 15:38:19.528727   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:38:19.549639   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:38:19.566917   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:38:19.583472   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:38:19.602953   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:38:19.621469   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:38:19.640433   27927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:38:19.660220   27927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 15:38:19.673942   27927 ssh_runner.go:195] Run: openssl version
	I0728 15:38:19.679322   27927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:38:19.688705   27927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:38:19.692671   27927 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:38:19.692712   27927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:38:19.697809   27927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 15:38:19.705558   27927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:38:19.713390   27927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:38:19.717330   27927 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:38:19.717371   27927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:38:19.722257   27927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 15:38:19.729645   27927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:38:19.738898   27927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:38:19.742608   27927 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:38:19.742650   27927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:38:19.747708   27927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
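	(The openssl x509 -hash calls above compute the subject-name hash OpenSSL uses to look up CAs in /etc/ssl/certs, so each PEM gets a <hash>.0 symlink — e.g. b5213941.0 for minikubeCA.pem. The general pattern behind those three links, sketched:

	    # Link a CA into the OpenSSL hash directory (same scheme as the links above)
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	)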
	I0728 15:38:19.755280   27927 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220728153807-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:38:19.755368   27927 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:38:19.783507   27927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:38:19.792258   27927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:38:19.799510   27927 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:38:19.799554   27927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:38:19.806695   27927 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 15:38:19.806716   27927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:38:20.568004   27927 out.go:204]   - Generating certificates and keys ...
	I0728 15:38:22.412571   27927 out.go:204]   - Booting up control plane ...
	W0728 15:40:17.352088   27927 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-20220728153807-12923 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-20220728153807-12923 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
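	(The kubelet-check failure above is kubeadm polling the kubelet's local health endpoint until it times out. On the node, the same probe and the follow-ups the message suggests can be run directly — commands as quoted in the output above:

	    # The health probe kubeadm retries (port 10248 is the kubelet healthz endpoint)
	    curl -sSL http://localhost:10248/healthz
	    # If the connection is refused, inspect the service state and its journal
	    systemctl status kubelet
	    journalctl -xeu kubelet
	)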
	I0728 15:40:17.352125   27927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0728 15:40:17.775034   27927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:40:17.784129   27927 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:40:17.784185   27927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:40:17.791107   27927 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 15:40:17.791133   27927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:40:18.541329   27927 out.go:204]   - Generating certificates and keys ...
	I0728 15:40:19.326055   27927 out.go:204]   - Booting up control plane ...
	I0728 15:42:14.239101   27927 kubeadm.go:397] StartCluster complete in 3m54.485080664s
	I0728 15:42:14.239179   27927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:42:14.268541   27927 logs.go:274] 0 containers: []
	W0728 15:42:14.268557   27927 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:42:14.268616   27927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:42:14.298089   27927 logs.go:274] 0 containers: []
	W0728 15:42:14.298102   27927 logs.go:276] No container was found matching "etcd"
	I0728 15:42:14.298162   27927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:42:14.326063   27927 logs.go:274] 0 containers: []
	W0728 15:42:14.326075   27927 logs.go:276] No container was found matching "coredns"
	I0728 15:42:14.326135   27927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:42:14.355747   27927 logs.go:274] 0 containers: []
	W0728 15:42:14.355759   27927 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:42:14.355818   27927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:42:14.385008   27927 logs.go:274] 0 containers: []
	W0728 15:42:14.385020   27927 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:42:14.385075   27927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:42:14.413496   27927 logs.go:274] 0 containers: []
	W0728 15:42:14.413509   27927 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:42:14.413571   27927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:42:14.441883   27927 logs.go:274] 0 containers: []
	W0728 15:42:14.441896   27927 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:42:14.441954   27927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:42:14.470599   27927 logs.go:274] 0 containers: []
	W0728 15:42:14.470610   27927 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:42:14.470617   27927 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:42:14.470626   27927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:42:14.522139   27927 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:42:14.522154   27927 logs.go:123] Gathering logs for Docker ...
	I0728 15:42:14.522161   27927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:42:14.537954   27927 logs.go:123] Gathering logs for container status ...
	I0728 15:42:14.537966   27927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:42:16.592814   27927 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054870383s)
	I0728 15:42:16.592951   27927 logs.go:123] Gathering logs for kubelet ...
	I0728 15:42:16.592958   27927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:42:16.632638   27927 logs.go:123] Gathering logs for dmesg ...
	I0728 15:42:16.632650   27927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0728 15:42:16.644203   27927 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0728 15:42:16.644218   27927 out.go:239] * 
	W0728 15:42:16.644342   27927 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 15:42:16.644357   27927 out.go:239] * 
	W0728 15:42:16.644920   27927 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 15:42:16.707587   27927 out.go:177] 
	W0728 15:42:16.749727   27927 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 15:42:16.749881   27927 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0728 15:42:16.749961   27927 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0728 15:42:16.792503   27927 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220728153807-12923 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
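For reference, the recovery steps suggested in the log above combine into a short shell session. This is only a sketch: it assumes the failing profile name from this run (old-k8s-version-20220728153807-12923) and a Docker-driver node that is still running; the `exit status 109` in the assertion above is simply the code minikube returned after printing the K8S_KUBELET_NOT_RUNNING error.

	# open a shell on the minikube node for this profile
	minikube -p old-k8s-version-20220728153807-12923 ssh
	# inside the node: check the kubelet, as kubeadm suggests
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list the Kubernetes containers the runtime actually started
	docker ps -a | grep kube | grep -v pause
	exit
	# retry the start with the cgroup-driver hint from the Suggestion line
	minikube start -p old-k8s-version-20220728153807-12923 --extra-config=kubelet.cgroup-driver=systemd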
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220728153807-12923
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220728153807-12923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f",
	        "Created": "2022-07-28T22:38:14.165684968Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 226367,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:38:14.474965174Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hosts",
	        "LogPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f-json.log",
	        "Name": "/old-k8s-version-20220728153807-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220728153807-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220728153807-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220728153807-12923",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220728153807-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220728153807-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "583724ad15a1bb5d0d85e751ff38f95aa3b292879509514520f82dc83bcbd15c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58781"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58782"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58779"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58780"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/583724ad15a1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220728153807-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2056d9a86a4c",
	                        "old-k8s-version-20220728153807-12923"
	                    ],
	                    "NetworkID": "a0b55590b406427f4aa9e75be1fbe382dd54fa7a1c14e888e401b45bb478b32d",
	                    "EndpointID": "5b6d296354ccf12d05eec7dc23f27fb6b8a44e70e4b712e35f015256db9a0026",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923: exit status 6 (414.940751ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 15:42:17.365290   28591 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220728153807-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220728153807-12923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (250.17s)
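The status probe above exits 6 because the profile never reached a working kubeconfig entry: "old-k8s-version-20220728153807-12923" is absent from the integration kubeconfig, so the host shows Running while the endpoint lookup fails. As the warning itself suggests, a stale context would normally be repaired like this (a sketch, assuming the profile still exists):

	# re-point kubectl at the profile's current API endpoint
	minikube -p old-k8s-version-20220728153807-12923 update-context
	# confirm the context now resolves
	kubectl config get-contexts

Here that cannot help, since the first start never produced an apiserver to point at.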

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (56.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.099833236s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.103998612s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.10538723s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0728 15:39:07.128527   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:39:07.133752   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:39:07.143888   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:39:07.164173   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:39:07.204321   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:39:07.284804   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:39:07.445026   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:39:07.765195   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0728 15:39:08.406088   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:39:09.688547   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:39:10.594923   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:39:10.601122   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:39:10.611869   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:39:10.634055   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:39:10.675156   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:39:10.755409   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:39:10.916019   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:39:11.236785   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:39:11.877537   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:39:12.249240   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:39:13.159660   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.113462113s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0728 15:39:15.720197   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:39:17.372126   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0728 15:39:20.841145   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.114519429s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0728 15:39:27.614106   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.104540843s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0728 15:39:31.082815   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.119071119s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (56.44s)
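The assertion at net_test.go:243 means hairpin traffic failed: a pod could not reach itself through its own service. The probe the test retries is reproducible by hand; the sketch below assumes the netcat deployment and matching netcat service from the test fixtures are still deployed in the kubenet profile:

	# from inside the netcat pod, dial the netcat service on port 8080;
	# exit 0 means hairpin NAT works, exit 1 matches the failures above
	kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- \
	  /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"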

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-20220728153807-12923 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220728153807-12923 create -f testdata/busybox.yaml: exit status 1 (29.666021ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-20220728153807-12923" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-20220728153807-12923 create -f testdata/busybox.yaml failed: exit status 1
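The `context "old-k8s-version-20220728153807-12923" does not exist` error is a direct consequence of the FirstStart failure above: the profile was never written to the kubeconfig, so every kubectl call made with that --context flag fails before it can reach a cluster. A quick check (sketch):

	# the profile's context is missing from the kubeconfig used by this run
	kubectl config get-contexts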
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220728153807-12923
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220728153807-12923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f",
	        "Created": "2022-07-28T22:38:14.165684968Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 226367,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:38:14.474965174Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hosts",
	        "LogPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f-json.log",
	        "Name": "/old-k8s-version-20220728153807-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220728153807-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220728153807-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220728153807-12923",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220728153807-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220728153807-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "583724ad15a1bb5d0d85e751ff38f95aa3b292879509514520f82dc83bcbd15c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58781"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58782"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58779"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58780"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/583724ad15a1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220728153807-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2056d9a86a4c",
	                        "old-k8s-version-20220728153807-12923"
	                    ],
	                    "NetworkID": "a0b55590b406427f4aa9e75be1fbe382dd54fa7a1c14e888e401b45bb478b32d",
	                    "EndpointID": "5b6d296354ccf12d05eec7dc23f27fb6b8a44e70e4b712e35f015256db9a0026",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923: exit status 6 (420.964151ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 15:42:17.881261   28604 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220728153807-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220728153807-12923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220728153807-12923
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220728153807-12923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f",
	        "Created": "2022-07-28T22:38:14.165684968Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 226367,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:38:14.474965174Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hosts",
	        "LogPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f-json.log",
	        "Name": "/old-k8s-version-20220728153807-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220728153807-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220728153807-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/docker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d059732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220728153807-12923",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220728153807-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220728153807-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "583724ad15a1bb5d0d85e751ff38f95aa3b292879509514520f82dc83bcbd15c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58781"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58782"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58779"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58780"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/583724ad15a1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220728153807-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2056d9a86a4c",
	                        "old-k8s-version-20220728153807-12923"
	                    ],
	                    "NetworkID": "a0b55590b406427f4aa9e75be1fbe382dd54fa7a1c14e888e401b45bb478b32d",
	                    "EndpointID": "5b6d296354ccf12d05eec7dc23f27fb6b8a44e70e4b712e35f015256db9a0026",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923: exit status 6 (424.35149ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0728 15:42:18.372009   28618 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220728153807-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220728153807-12923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (1.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220728153807-12923 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0728 15:42:29.355644   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:29.360805   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:29.371660   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:29.391890   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:29.432163   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:29.545909   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:29.707361   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:30.029521   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:30.671985   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:31.954236   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:33.818787   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:34.515843   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:37.931132   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 15:42:39.636275   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:49.876310   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:51.979490   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:51.985912   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:51.998110   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:52.018294   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:52.058545   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:52.138656   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:52.299601   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:52.620711   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:53.260994   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:54.543275   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:57.104945   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:02.227136   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:04.782139   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:43:10.357455   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:43:12.469353   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:14.778250   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:27.412190   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:43:29.782562   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:43:32.951168   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:36.605579   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:36.611284   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:36.623463   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:36.645666   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:36.686778   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:36.767629   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:36.929944   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:37.250340   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:37.890486   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:39.171333   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:41.732012   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:43:46.852576   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220728153807-12923 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.189238552s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220728153807-12923 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-20220728153807-12923 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220728153807-12923 describe deploy/metrics-server -n kube-system: exit status 1 (30.175107ms)

** stderr ** 
	error: context "old-k8s-version-20220728153807-12923" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-20220728153807-12923 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220728153807-12923
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220728153807-12923:

-- stdout --
	[
	    {
	        "Id": "2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f",
	        "Created": "2022-07-28T22:38:14.165684968Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 226367,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:38:14.474965174Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hosts",
	        "LogPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f-json.log",
	        "Name": "/old-k8s-version-20220728153807-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220728153807-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220728153807-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/docker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d059732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220728153807-12923",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220728153807-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220728153807-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "583724ad15a1bb5d0d85e751ff38f95aa3b292879509514520f82dc83bcbd15c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58781"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58782"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58779"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58780"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/583724ad15a1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220728153807-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2056d9a86a4c",
	                        "old-k8s-version-20220728153807-12923"
	                    ],
	                    "NetworkID": "a0b55590b406427f4aa9e75be1fbe382dd54fa7a1c14e888e401b45bb478b32d",
	                    "EndpointID": "5b6d296354ccf12d05eec7dc23f27fb6b8a44e70e4b712e35f015256db9a0026",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923: exit status 6 (418.344291ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0728 15:43:48.074551   28719 status.go:413] kubeconfig endpoint: extract IP: "old-k8s-version-20220728153807-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-20220728153807-12923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.70s)

TestStartStop/group/old-k8s-version/serial/SecondStart (491.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220728153807-12923 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0728 15:43:51.317077   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:43:57.092739   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:44:07.124662   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:44:10.592190   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:44:13.912780   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:44:17.574638   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:44:27.826544   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:44:34.819199   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:44:36.698220   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:44:38.303785   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:44:58.535209   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:45:13.236106   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:45:35.831640   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:45:43.562906   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:45:45.935960   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-20220728153807-12923 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m6.470256486s)

-- stdout --
	* [old-k8s-version-20220728153807-12923] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	* Kubernetes 1.24.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-20220728153807-12923 in cluster old-k8s-version-20220728153807-12923
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220728153807-12923" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0728 15:43:50.132817   28750 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:43:50.132989   28750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:43:50.132995   28750 out.go:309] Setting ErrFile to fd 2...
	I0728 15:43:50.133000   28750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:43:50.133108   28750 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:43:50.133582   28750 out.go:303] Setting JSON to false
	I0728 15:43:50.149553   28750 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9272,"bootTime":1659038958,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:43:50.149639   28750 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:43:50.171234   28750 out.go:177] * [old-k8s-version-20220728153807-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:43:50.193226   28750 notify.go:193] Checking for updates...
	I0728 15:43:50.215046   28750 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:43:50.237023   28750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:43:50.257931   28750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:43:50.279132   28750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:43:50.301171   28750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:43:50.323702   28750 config.go:178] Loaded profile config "old-k8s-version-20220728153807-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0728 15:43:50.345915   28750 out.go:177] * Kubernetes 1.24.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.3
	I0728 15:43:50.367017   28750 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:43:50.437569   28750 docker.go:137] docker version: linux-20.10.17
	I0728 15:43:50.437729   28750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:43:50.569692   28750 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:43:50.498204227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:43:50.591689   28750 out.go:177] * Using the docker driver based on existing profile
	I0728 15:43:50.613510   28750 start.go:284] selected driver: docker
	I0728 15:43:50.613538   28750 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220728153807-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:43:50.613718   28750 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:43:50.617013   28750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:43:50.747972   28750 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:43:50.676530285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:43:50.748120   28750 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 15:43:50.748138   28750 cni.go:95] Creating CNI manager for ""
	I0728 15:43:50.748148   28750 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:43:50.748159   28750 start_flags.go:310] config:
	{Name:old-k8s-version-20220728153807-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:43:50.791553   28750 out.go:177] * Starting control plane node old-k8s-version-20220728153807-12923 in cluster old-k8s-version-20220728153807-12923
	I0728 15:43:50.812795   28750 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:43:50.834772   28750 out.go:177] * Pulling base image ...
	I0728 15:43:50.876918   28750 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 15:43:50.876976   28750 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:43:50.877001   28750 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0728 15:43:50.877022   28750 cache.go:57] Caching tarball of preloaded images
	I0728 15:43:50.877208   28750 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:43:50.877230   28750 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0728 15:43:50.878252   28750 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/config.json ...
	I0728 15:43:50.941312   28750 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:43:50.941328   28750 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:43:50.941340   28750 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:43:50.941397   28750 start.go:370] acquiring machines lock for old-k8s-version-20220728153807-12923: {Name:mke15a14ac0b96e8c97ba263723c52eb5c7e7def Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:43:50.941474   28750 start.go:374] acquired machines lock for "old-k8s-version-20220728153807-12923" in 57.265µs
	I0728 15:43:50.941495   28750 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:43:50.941503   28750 fix.go:55] fixHost starting: 
	I0728 15:43:50.941727   28750 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220728153807-12923 --format={{.State.Status}}
	I0728 15:43:51.004580   28750 fix.go:103] recreateIfNeeded on old-k8s-version-20220728153807-12923: state=Stopped err=<nil>
	W0728 15:43:51.004619   28750 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:43:51.026654   28750 out.go:177] * Restarting existing docker container for "old-k8s-version-20220728153807-12923" ...
	I0728 15:43:51.069483   28750 cli_runner.go:164] Run: docker start old-k8s-version-20220728153807-12923
	I0728 15:43:51.432239   28750 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220728153807-12923 --format={{.State.Status}}
	I0728 15:43:51.497121   28750 kic.go:415] container "old-k8s-version-20220728153807-12923" state is running.
	I0728 15:43:51.497698   28750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220728153807-12923
	I0728 15:43:51.568555   28750 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/config.json ...
	I0728 15:43:51.568955   28750 machine.go:88] provisioning docker machine ...
	I0728 15:43:51.568976   28750 ubuntu.go:169] provisioning hostname "old-k8s-version-20220728153807-12923"
	I0728 15:43:51.569046   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:51.636172   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:51.636370   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:51.636385   28750 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220728153807-12923 && echo "old-k8s-version-20220728153807-12923" | sudo tee /etc/hostname
	I0728 15:43:51.762903   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220728153807-12923
	
	I0728 15:43:51.762993   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:51.828455   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:51.828606   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:51.828621   28750 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220728153807-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220728153807-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220728153807-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:43:51.949269   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:43:51.949293   28750 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:43:51.949317   28750 ubuntu.go:177] setting up certificates
	I0728 15:43:51.949328   28750 provision.go:83] configureAuth start
	I0728 15:43:51.949396   28750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220728153807-12923
	I0728 15:43:52.013262   28750 provision.go:138] copyHostCerts
	I0728 15:43:52.013379   28750 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:43:52.013389   28750 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:43:52.013487   28750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:43:52.013675   28750 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:43:52.013683   28750 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:43:52.013741   28750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:43:52.013881   28750 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:43:52.013887   28750 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:43:52.013945   28750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:43:52.014068   28750 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220728153807-12923 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220728153807-12923]
	I0728 15:43:52.162837   28750 provision.go:172] copyRemoteCerts
	I0728 15:43:52.162892   28750 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:43:52.162936   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:52.226854   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:52.314899   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:43:52.331775   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0728 15:43:52.349209   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 15:43:52.366683   28750 provision.go:86] duration metric: configureAuth took 417.345293ms
	I0728 15:43:52.366697   28750 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:43:52.366840   28750 config.go:178] Loaded profile config "old-k8s-version-20220728153807-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0728 15:43:52.366907   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:52.432300   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:52.432458   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:52.432469   28750 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:43:52.556064   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:43:52.556075   28750 ubuntu.go:71] root file system type: overlay
	I0728 15:43:52.556206   28750 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:43:52.556278   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:52.620853   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:52.621084   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:52.621129   28750 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:43:52.751843   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:43:52.751916   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:52.816883   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:52.817041   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:52.817055   28750 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:43:52.941836   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:43:52.941853   28750 machine.go:91] provisioned docker machine in 1.372912502s
	I0728 15:43:52.941863   28750 start.go:307] post-start starting for "old-k8s-version-20220728153807-12923" (driver="docker")
	I0728 15:43:52.941870   28750 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:43:52.941934   28750 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:43:52.941995   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:53.006600   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:53.094280   28750 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:43:53.100080   28750 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:43:53.100098   28750 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:43:53.100105   28750 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:43:53.100109   28750 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:43:53.100119   28750 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:43:53.100242   28750 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:43:53.100374   28750 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:43:53.100517   28750 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:43:53.109632   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:43:53.126762   28750 start.go:310] post-start completed in 184.891915ms
	I0728 15:43:53.126836   28750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:43:53.126883   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:53.191616   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:53.276705   28750 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:43:53.281015   28750 fix.go:57] fixHost completed within 2.33954993s
	I0728 15:43:53.281029   28750 start.go:82] releasing machines lock for "old-k8s-version-20220728153807-12923", held for 2.339584988s
	I0728 15:43:53.281105   28750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220728153807-12923
	I0728 15:43:53.345999   28750 ssh_runner.go:195] Run: systemctl --version
	I0728 15:43:53.346002   28750 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:43:53.346069   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:53.346083   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:53.415502   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:53.416382   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:53.693282   28750 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:43:53.703210   28750 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:43:53.703267   28750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:43:53.715068   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:43:53.728140   28750 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:43:53.798778   28750 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:43:53.864441   28750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:43:53.929027   28750 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:43:54.130959   28750 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:43:54.167626   28750 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:43:54.246239   28750 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0728 15:43:54.246432   28750 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220728153807-12923 dig +short host.docker.internal
	I0728 15:43:54.362961   28750 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:43:54.363076   28750 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:43:54.367718   28750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:43:54.377807   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:54.476552   28750 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 15:43:54.476614   28750 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:43:54.506826   28750 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0728 15:43:54.506844   28750 docker.go:542] Images already preloaded, skipping extraction
	I0728 15:43:54.506923   28750 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:43:54.537701   28750 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0728 15:43:54.537724   28750 cache_images.go:84] Images are preloaded, skipping loading
	I0728 15:43:54.537804   28750 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:43:54.609845   28750 cni.go:95] Creating CNI manager for ""
	I0728 15:43:54.609857   28750 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:43:54.609873   28750 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:43:54.609888   28750 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220728153807-12923 NodeName:old-k8s-version-20220728153807-12923 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:43:54.610015   28750 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220728153807-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220728153807-12923
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 15:43:54.610095   28750 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220728153807-12923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0728 15:43:54.610152   28750 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0728 15:43:54.618258   28750 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:43:54.618312   28750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:43:54.625914   28750 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0728 15:43:54.638312   28750 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:43:54.650390   28750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0728 15:43:54.662650   28750 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:43:54.666258   28750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:43:54.675591   28750 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923 for IP: 192.168.76.2
	I0728 15:43:54.675702   28750 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:43:54.675752   28750 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:43:54.675828   28750 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/client.key
	I0728 15:43:54.675888   28750 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.key.31bdca25
	I0728 15:43:54.675949   28750 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.key
	I0728 15:43:54.676161   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:43:54.676201   28750 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:43:54.676214   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:43:54.676249   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:43:54.676282   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:43:54.676311   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:43:54.676370   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:43:54.676906   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:43:54.693525   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0728 15:43:54.710007   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:43:54.727109   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 15:43:54.743956   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:43:54.760573   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:43:54.777182   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:43:54.793800   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:43:54.810385   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:43:54.826768   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:43:54.843784   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:43:54.860371   28750 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 15:43:54.873089   28750 ssh_runner.go:195] Run: openssl version
	I0728 15:43:54.878350   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:43:54.886133   28750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:43:54.889944   28750 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:43:54.889982   28750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:43:54.896504   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 15:43:54.903918   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:43:54.911623   28750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:43:54.915545   28750 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:43:54.915585   28750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:43:54.920977   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 15:43:54.928142   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:43:54.935893   28750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:43:54.939932   28750 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:43:54.939977   28750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:43:54.945076   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 15:43:54.952023   28750 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220728153807-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
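That StartCluster line is Go's %+v rendering of minikube's cluster config struct. A trimmed sketch of how a few of the dumped fields map back to typed config (field names taken from the keys visible in the dump; this is a hypothetical subset, and the real type in minikube's config package carries many more fields):

```go
package main

import "fmt"

// KubernetesConfig and ClusterConfig are hypothetical subsets of the
// struct behind the StartCluster dump; names follow the dumped keys.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	ServiceCIDR       string
	NodePort          int
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int // MB
	CPUs             int
	DiskSize         int // MB
	KubernetesConfig KubernetesConfig
	Addons           map[string]bool
}

func main() {
	cc := ClusterConfig{
		Name:     "old-k8s-version-20220728153807-12923",
		Driver:   "docker",
		Memory:   2200,
		CPUs:     2,
		DiskSize: 20000,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.16.0",
			ClusterName:       "old-k8s-version-20220728153807-12923",
			ContainerRuntime:  "docker",
			ServiceCIDR:       "10.96.0.0/12",
			NodePort:          8443,
		},
		Addons: map[string]bool{"dashboard": true},
	}
	// %+v is what produces the flat key:value rendering seen in the log.
	fmt.Printf("StartCluster: %+v\n", cc)
}
```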
	I0728 15:43:54.952124   28750 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:43:54.982413   28750 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:43:54.990129   28750 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 15:43:54.990147   28750 kubeadm.go:626] restartCluster start
	I0728 15:43:54.990193   28750 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 15:43:54.997084   28750 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:54.997139   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:55.061683   28750 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220728153807-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:43:55.061868   28750 kubeconfig.go:127] "old-k8s-version-20220728153807-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 15:43:55.062205   28750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
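The lock.go line above shows the kubeconfig repair being written under a file lock with a 500ms retry delay and a 1m timeout, so concurrent minikube processes cannot race on the same kubeconfig. Minikube uses its own lock package for this; the following is only a loose flock-based sketch of the same acquire-with-timeout pattern (helper name ours, Unix-only):

```go
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// writeFileLocked takes an exclusive lock before rewriting the file,
// retrying every 500ms up to the timeout, echoing the Delay/Timeout
// fields in the log line. Illustrative sketch, not minikube's lock.go.
func writeFileLocked(path string, data []byte, timeout time.Duration) error {
	f, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()

	deadline := time.Now().Add(timeout)
	for {
		// Non-blocking flock; poll until acquired or the deadline passes.
		if err = syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			break
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out locking %s: %w", path, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

	if err := f.Truncate(0); err != nil {
		return err
	}
	_, err = f.WriteAt(data, 0)
	return err
}

func main() {
	err := writeFileLocked("/tmp/kubeconfig-demo", []byte("apiVersion: v1\n"), time.Minute)
	fmt.Println("write:", err)
}
```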
	I0728 15:43:55.063638   28750 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 15:43:55.071259   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.071320   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.079503   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:55.280076   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.280184   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.290411   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:55.481690   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.481806   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.492191   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:55.681640   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.681852   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.693077   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:55.881629   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.881805   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.893813   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.081620   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.081769   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.092929   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.281611   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.281821   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.292761   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.479869   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.480047   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.490772   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.679673   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.679846   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.690437   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.881685   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.881791   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.892358   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.079845   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.079982   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.090531   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.280055   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.280190   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.291095   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.480150   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.480244   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.492691   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.681615   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.681760   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.693150   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.881328   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.881469   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.892688   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:58.081706   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:58.081861   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:58.093332   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:58.093342   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:58.093387   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:58.101659   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:58.101671   28750 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0728 15:43:58.101676   28750 kubeadm.go:1092] stopping kube-system containers ...
	I0728 15:43:58.101734   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:43:58.130995   28750 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 15:43:58.141397   28750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:43:58.149507   28750 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5751 Jul 28 22:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5787 Jul 28 22:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Jul 28 22:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Jul 28 22:40 /etc/kubernetes/scheduler.conf
	
	I0728 15:43:58.149568   28750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 15:43:58.157415   28750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 15:43:58.165088   28750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 15:43:58.172300   28750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 15:43:58.179816   28750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:43:58.187386   28750 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 15:43:58.187397   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:58.238316   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:59.009658   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:59.230098   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:59.286178   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
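The five commands above show the restart path re-running kubeadm phase by phase rather than doing a destructive full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, then etcd, each against the same /var/tmp/minikube/kubeadm.yaml. A sketch of that ordered invocation (hypothetical wrapper; it reuses the PATH override visible in the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the phase order visible in the log. Each phase is
// idempotent against an existing config, which is what makes a cluster
// restart possible without wiping state.
func runInitPhases(k8sVersion, config string) error {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
			k8sVersion, phase, config)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("v1.16.0", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}
```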
	I0728 15:43:59.342104   28750 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:43:59.342164   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:43:59.852670   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:00.352781   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:00.850650   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:01.352768   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:01.850866   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:02.351446   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:02.850606   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:03.351150   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:03.851365   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:04.352535   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:04.852723   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:05.350807   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:05.852624   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:06.352589   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:06.851125   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:07.350565   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:07.852643   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:08.350474   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:08.850445   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:09.352534   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:09.850933   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:10.352606   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:10.852619   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:11.350440   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:11.852134   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:12.352473   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:12.851013   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:13.352270   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:13.850370   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:14.350630   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:14.851959   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:15.352566   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:15.851616   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:16.350762   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:16.850420   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:17.350313   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:17.852472   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:18.350337   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:18.851370   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:19.350807   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:19.851563   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:20.351203   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:20.851730   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:21.350468   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:21.851009   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:22.350371   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:22.850766   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:23.351160   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:23.851721   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:24.351235   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:24.850785   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:25.351192   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:25.850201   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:26.350640   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:26.850236   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:27.350168   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:27.850786   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:28.351502   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:28.851514   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:29.350143   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:29.851249   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:30.350104   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:30.850231   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:31.352251   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:31.850849   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:32.350184   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:32.850157   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:33.351061   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:33.850197   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:34.351704   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:34.850967   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:35.350170   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:35.852079   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:36.350361   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:36.849970   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:37.352028   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:37.852028   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:38.352103   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:38.850752   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:39.349925   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:39.850497   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:40.350260   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:40.852112   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:41.350628   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:41.850335   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:42.350937   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:42.850588   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:43.350213   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:43.851905   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:44.350537   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:44.851886   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:45.351362   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:45.850422   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:46.350013   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:46.851847   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:47.350287   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:47.851880   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:48.349946   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:48.850339   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:49.350494   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:49.851141   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:50.350171   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:50.849782   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:51.350363   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:51.850156   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:52.351696   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:52.851835   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:53.349667   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:53.851882   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:54.351848   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:54.851044   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:55.351691   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:55.851300   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:56.351196   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:56.851744   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:57.351804   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:57.850801   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:58.350639   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:58.851158   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
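The minute of pgrep probes above is a fixed-interval wait loop: minikube polls for a kube-apiserver process roughly every 500ms and gives up once its deadline passes, which is the point reached at 15:44:59 when it switches to gathering diagnostics instead. A minimal sketch of that loop (hypothetical helper name; pgrep exits non-zero when nothing matches, which drives the retry):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*`
// at the ~500ms cadence seen in the log until a PID appears or the
// timeout elapses.
func waitForAPIServerProcess(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // pgrep exits 0 only when a process matched
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", errors.New("timed out waiting for apiserver process")
}

func main() {
	pid, err := waitForAPIServerProcess(time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
```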
	I0728 15:44:59.349783   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:44:59.382837   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.382851   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:44:59.382917   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:44:59.412464   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.412476   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:44:59.412541   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:44:59.442864   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.442878   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:44:59.442939   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:44:59.474280   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.474292   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:44:59.474350   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:44:59.504175   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.504187   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:44:59.504249   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:44:59.533670   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.533684   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:44:59.533737   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:44:59.565362   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.565374   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:44:59.565431   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:44:59.595139   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.595151   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:44:59.595159   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:44:59.595166   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:44:59.609196   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:44:59.609210   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:01.663458   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054270661s)
	I0728 15:45:01.663570   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:01.663577   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:01.703232   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:01.703247   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:01.715560   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:01.715573   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:01.767426   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
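Each diagnostics pass above collects container status with `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`, i.e. prefer crictl when it is installed and fall back to plain `docker ps -a` otherwise. A sketch of the same fallback in Go (hypothetical helper, mirroring the shell idiom in the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when it is on PATH and falls back to
// docker, matching the `which crictl || ...` idiom in the log.
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Printf("%s", out)
}
```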
	I0728 15:45:04.268324   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:04.349908   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:04.380997   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.381016   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:04.381076   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:04.411821   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.411834   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:04.411892   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:04.441534   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.441546   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:04.441601   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:04.472385   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.472397   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:04.472486   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:04.501753   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.501766   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:04.501827   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:04.536867   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.536880   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:04.536936   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:04.567861   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.567875   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:04.567930   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:04.597628   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.597640   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:04.597647   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:04.597657   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:06.654101   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056467581s)
	I0728 15:45:06.654210   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:06.654217   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:06.694756   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:06.694770   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:06.707257   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:06.707270   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:06.761874   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:06.761884   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:06.761891   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:09.276908   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:09.351563   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:09.386142   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.386155   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:09.386219   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:09.418466   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.418478   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:09.418538   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:09.448308   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.448320   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:09.448380   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:09.479593   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.479607   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:09.479679   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:09.508030   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.508043   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:09.508099   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:09.537779   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.537792   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:09.537846   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:09.566993   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.567006   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:09.567065   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:09.596654   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.596672   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:09.596682   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:09.596738   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:09.649892   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:09.649903   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:09.649919   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:09.664184   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:09.664200   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:11.716355   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052177269s)
	I0728 15:45:11.716505   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:11.716513   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:11.755880   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:11.755897   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:14.268633   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:14.349684   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:14.380092   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.380128   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:14.380189   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:14.410724   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.410736   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:14.410797   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:14.439371   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.439384   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:14.439439   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:14.469393   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.469406   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:14.469468   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:14.498223   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.498241   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:14.498310   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:14.527916   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.527928   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:14.527993   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:14.557360   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.557378   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:14.557437   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:14.586000   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.586014   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:14.586021   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:14.586027   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:14.625146   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:14.625158   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:14.637616   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:14.637630   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:14.690046   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:14.690066   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:14.690072   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:14.703985   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:14.703997   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:16.758147   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054173236s)
	I0728 15:45:19.260560   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:19.351400   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:19.382794   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.382806   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:19.382867   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:19.412998   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.413010   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:19.413076   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:19.442557   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.442571   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:19.442639   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:19.475183   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.475196   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:19.475261   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:19.505391   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.505404   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:19.505469   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:19.536777   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.536793   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:19.536848   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:19.570024   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.570037   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:19.570094   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:19.599292   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.599304   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:19.599311   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:19.599318   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:19.639705   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:19.639722   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:19.651159   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:19.651172   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:19.703460   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:19.703471   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:19.703478   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:19.717843   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:19.717856   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:21.769960   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05212591s)
	I0728 15:45:24.270526   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:24.351276   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:24.382811   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.382824   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:24.382886   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:24.414444   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.414457   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:24.414517   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:24.443832   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.443845   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:24.443908   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:24.474162   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.474175   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:24.474237   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:24.503347   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.503359   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:24.503421   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:24.531984   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.531996   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:24.532053   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:24.562043   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.562057   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:24.562112   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:24.591508   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.591520   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:24.591528   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:24.591535   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:24.631583   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:24.631595   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:24.643477   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:24.643492   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:24.697351   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:24.697362   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:24.697368   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:24.711821   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:24.711834   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:26.770905   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059093267s)
	I0728 15:45:29.271547   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:29.349224   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:29.380066   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.380080   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:29.380151   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:29.409249   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.409261   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:29.409319   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:29.437151   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.437169   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:29.437240   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:29.467091   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.467103   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:29.467161   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:29.497532   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.497549   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:29.497615   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:29.526724   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.526737   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:29.526795   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:29.555433   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.555447   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:29.555505   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:29.584958   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.584972   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:29.584981   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:29.584988   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:29.624109   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:29.624122   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:29.635456   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:29.635476   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:29.687908   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:29.687924   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:29.687931   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:29.702012   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:29.702024   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:31.757527   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055526095s)
	I0728 15:45:34.258583   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:34.349159   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:34.379696   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.379712   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:34.379777   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:34.409678   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.409691   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:34.409750   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:34.448652   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.448666   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:34.448783   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:34.481247   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.481260   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:34.481331   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:34.515888   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.515900   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:34.515957   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:34.546279   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.546293   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:34.546361   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:34.578942   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.578959   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:34.579027   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:34.610475   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.610486   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:34.610493   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:34.610500   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:34.657901   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:34.657920   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:34.671775   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:34.671798   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:34.725845   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:34.725862   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:34.725869   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:34.743490   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:34.743511   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:36.796303   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05281482s)
	I0728 15:45:39.297144   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:39.349206   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:39.384998   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.385012   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:39.385074   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:39.415143   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.415155   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:39.415212   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:39.455721   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.455742   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:39.455813   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:39.486528   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.486545   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:39.486610   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:39.514977   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.514990   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:39.515048   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:39.550354   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.550367   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:39.550435   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:39.583427   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.583445   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:39.583507   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:39.613948   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.613963   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:39.613970   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:39.613976   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:41.665141   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051188026s)
	I0728 15:45:41.665254   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:41.665262   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:41.703690   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:41.703705   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:41.715446   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:41.715461   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:41.769895   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:41.769906   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:41.769913   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:44.283371   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:44.350279   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:44.381088   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.381106   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:44.381177   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:44.410783   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.410796   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:44.410859   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:44.439499   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.439511   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:44.439565   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:44.468617   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.468631   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:44.468687   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:44.502836   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.502850   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:44.502906   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:44.531631   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.531645   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:44.531710   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:44.562770   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.562782   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:44.562843   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:44.590589   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.590605   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:44.590612   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:44.590619   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:44.630687   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:44.630701   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:44.643944   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:44.643958   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:44.697537   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:44.697552   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:44.697560   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:44.711695   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:44.711708   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:46.766195   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054508784s)
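
Note: every "describe nodes" attempt in this section fails identically with a connection refused on localhost:8443, which is consistent with the empty kube-apiserver container lookups above: nothing is listening on the apiserver port. A quick, hedged way to confirm the port is closed (the address comes from the kubectl error text, not from minikube's own code; run it on the node being debugged):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If nothing listens on 8443, DialTimeout returns "connection refused",
	// matching the kubectl error repeated in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open")
}
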
	I0728 15:45:49.266834   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:49.350965   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:49.381946   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.381958   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:49.382017   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:49.411642   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.411655   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:49.411712   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:49.443920   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.443931   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:49.443989   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:49.489604   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.489617   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:49.489677   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:49.521878   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.521891   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:49.521946   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:49.550505   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.550518   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:49.550579   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:49.578158   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.578171   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:49.578228   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:49.606569   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.606582   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:49.606589   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:49.606596   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:49.647420   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:49.647434   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:49.659418   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:49.659430   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:49.712728   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:49.712739   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:49.712748   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:49.726477   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:49.726490   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:51.782399   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055932042s)
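
Note: the pgrep probes above run on a roughly 5-second cadence (15:45:39, :44, :49, :54). A minimal sketch of such a poll-until-deadline loop, assuming a 5s interval and a 2-minute deadline (both inferred from the timestamps, not read from minikube's source, and executed locally instead of via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // hypothetical deadline
	for time.Now().Before(deadline) {
		// Same liveness check as in the log: is a kube-apiserver process running?
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process found")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
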
	I0728 15:45:54.282734   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:54.350836   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:54.380911   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.380923   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:54.380988   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:54.409653   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.409665   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:54.409728   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:54.437934   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.437948   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:54.438009   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:54.469669   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.469682   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:54.469762   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:54.497866   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.497878   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:54.497939   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:54.527154   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.527166   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:54.527225   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:54.555859   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.555872   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:54.555929   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:54.585491   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.585508   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:54.585515   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:54.585527   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:54.638036   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:54.638054   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:54.638060   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:54.651690   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:54.651703   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:56.704258   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052577476s)
	I0728 15:45:56.704368   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:56.704375   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:56.743901   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:56.743916   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:59.255613   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:59.348618   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:59.378663   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.378676   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:59.378733   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:59.407038   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.407050   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:59.407106   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:59.450158   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.450182   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:59.450263   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:59.481564   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.481576   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:59.481635   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:59.509158   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.509171   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:59.509229   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:59.547552   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.547570   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:59.547643   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:59.578542   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.578554   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:59.578613   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:59.606863   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.606876   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:59.606883   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:59.606892   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:59.649194   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:59.649222   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:59.663803   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:59.663819   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:59.714772   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:59.714789   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:59.714798   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:59.734190   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:59.734229   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:01.794612   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060404071s)
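
Note: once the container lookups come up empty, each cycle gathers the same fixed set of log sources (kubelet, dmesg, describe nodes, Docker, container status). A hedged sketch of that gather table, with the shell commands copied verbatim from the log and the map itself purely illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied from the log above; the structure is a sketch, not minikube's logs.go.
	sources := map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"Docker":           `sudo journalctl -u docker -n 400`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("== %s ==\n%s\n", name, out)
	}
}
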
	I0728 15:46:04.294986   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:04.348576   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:04.378484   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.378498   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:04.378561   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:04.406624   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.406636   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:04.406692   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:04.445898   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.445930   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:04.445992   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:04.489972   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.489989   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:04.490075   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:04.530482   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.530498   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:04.530561   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:04.563512   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.563527   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:04.563586   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:04.597809   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.597825   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:04.597888   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:04.635527   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.635544   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:04.635553   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:04.635560   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:04.648400   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:04.648417   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:04.714199   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:04.714221   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:04.714234   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:04.731052   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:04.731068   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:06.793258   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.062211512s)
	I0728 15:46:06.793371   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:06.793474   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:09.339612   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:09.848472   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:09.877399   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.877411   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:09.877472   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:09.906396   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.906414   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:09.906480   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:09.936854   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.936869   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:09.936928   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:09.966233   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.966249   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:09.966315   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:09.996992   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.997005   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:09.997065   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:10.033579   28750 logs.go:274] 0 containers: []
	W0728 15:46:10.033593   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:10.033650   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:10.069419   28750 logs.go:274] 0 containers: []
	W0728 15:46:10.069433   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:10.069498   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:10.099084   28750 logs.go:274] 0 containers: []
	W0728 15:46:10.099097   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:10.099104   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:10.099112   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:10.112767   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:10.112787   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:10.173268   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:10.173288   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:10.173301   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:10.188909   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:10.188923   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:12.242044   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053144227s)
	I0728 15:46:12.242152   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:12.242158   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:14.784324   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:14.850444   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:14.882027   28750 logs.go:274] 0 containers: []
	W0728 15:46:14.882040   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:14.882097   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:14.912290   28750 logs.go:274] 0 containers: []
	W0728 15:46:14.912303   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:14.912361   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:14.946389   28750 logs.go:274] 0 containers: []
	W0728 15:46:14.946410   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:14.946488   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:14.978870   28750 logs.go:274] 0 containers: []
	W0728 15:46:14.978883   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:14.978943   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:15.008965   28750 logs.go:274] 0 containers: []
	W0728 15:46:15.008978   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:15.009036   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:15.037778   28750 logs.go:274] 0 containers: []
	W0728 15:46:15.037792   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:15.037852   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:15.066142   28750 logs.go:274] 0 containers: []
	W0728 15:46:15.066154   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:15.066212   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:15.097151   28750 logs.go:274] 0 containers: []
	W0728 15:46:15.097164   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:15.097172   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:15.097179   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:15.139648   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:15.168589   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:15.186371   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:15.186386   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:15.242465   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:15.242479   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:15.242491   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:15.256100   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:15.256112   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:17.308800   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052710598s)
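
Note: each cycle checks every control-plane component by container name with the same query, docker ps -a --filter=name=k8s_<component> --format={{.ID}}. A self-contained sketch of that lookup; the component list is copied from the log, while the containerIDs helper is a hypothetical name introduced here for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of containers whose name matches k8s_<component>.
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kubernetes-dashboard", "storage-provisioner", "kube-controller-manager"} {
		fmt.Printf("%s: %d containers\n", c, len(containerIDs(c)))
	}
}
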
	I0728 15:46:19.811135   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:19.848655   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:19.879363   28750 logs.go:274] 0 containers: []
	W0728 15:46:19.879375   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:19.879433   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:19.909343   28750 logs.go:274] 0 containers: []
	W0728 15:46:19.909355   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:19.909414   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:19.938912   28750 logs.go:274] 0 containers: []
	W0728 15:46:19.938925   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:19.938985   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:19.975343   28750 logs.go:274] 0 containers: []
	W0728 15:46:19.975357   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:19.975425   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:20.008264   28750 logs.go:274] 0 containers: []
	W0728 15:46:20.008275   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:20.008331   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:20.038658   28750 logs.go:274] 0 containers: []
	W0728 15:46:20.038670   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:20.038723   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:20.070456   28750 logs.go:274] 0 containers: []
	W0728 15:46:20.070470   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:20.070534   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:20.102016   28750 logs.go:274] 0 containers: []
	W0728 15:46:20.102029   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:20.102037   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:20.102046   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:20.114591   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:20.114610   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:20.174188   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:20.174201   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:20.174208   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:20.189645   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:20.189663   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:22.250872   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061232319s)
	I0728 15:46:22.250983   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:22.250991   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:24.791788   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:24.849171   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:24.880694   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.880706   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:24.880760   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:24.908813   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.908826   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:24.908880   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:24.937412   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.937425   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:24.937484   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:24.966808   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.966819   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:24.966880   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:24.996939   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.996952   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:24.997013   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:25.025856   28750 logs.go:274] 0 containers: []
	W0728 15:46:25.025868   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:25.025927   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:25.054899   28750 logs.go:274] 0 containers: []
	W0728 15:46:25.054911   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:25.054970   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:25.083700   28750 logs.go:274] 0 containers: []
	W0728 15:46:25.083712   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:25.083720   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:25.083729   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:25.097410   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:25.097423   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:27.151701   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054299606s)
	I0728 15:46:27.151808   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:27.151815   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:27.192088   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:27.192102   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:27.203829   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:27.203842   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:27.257399   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:29.758384   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:29.848749   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:29.880250   28750 logs.go:274] 0 containers: []
	W0728 15:46:29.880262   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:29.880318   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:29.910202   28750 logs.go:274] 0 containers: []
	W0728 15:46:29.910215   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:29.910271   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:29.940618   28750 logs.go:274] 0 containers: []
	W0728 15:46:29.940632   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:29.940699   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:29.971567   28750 logs.go:274] 0 containers: []
	W0728 15:46:29.971583   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:29.971645   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:30.004734   28750 logs.go:274] 0 containers: []
	W0728 15:46:30.004750   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:30.004814   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:30.036150   28750 logs.go:274] 0 containers: []
	W0728 15:46:30.036164   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:30.036234   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:30.066088   28750 logs.go:274] 0 containers: []
	W0728 15:46:30.066101   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:30.066156   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:30.095216   28750 logs.go:274] 0 containers: []
	W0728 15:46:30.095228   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:30.095235   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:30.095242   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:30.148425   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:30.152196   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:30.152207   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:30.165693   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:30.165704   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:32.216553   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050871151s)
	I0728 15:46:32.216665   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:32.216673   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:32.259143   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:32.259161   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:34.771261   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:34.850228   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:34.881646   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.881658   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:34.881714   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:34.911053   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.911065   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:34.911120   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:34.940187   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.940199   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:34.940257   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:34.968953   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.968965   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:34.969022   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:34.999346   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.999359   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:34.999415   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:35.028920   28750 logs.go:274] 0 containers: []
	W0728 15:46:35.028933   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:35.028991   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:35.058519   28750 logs.go:274] 0 containers: []
	W0728 15:46:35.058531   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:35.058589   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:35.087805   28750 logs.go:274] 0 containers: []
	W0728 15:46:35.087817   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:35.087824   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:35.087831   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:35.127597   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:35.127610   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:35.140602   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:35.151800   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:35.210991   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:35.211004   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:35.211011   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:35.227071   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:35.227085   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:37.280866   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053801834s)
	I0728 15:46:39.781106   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:39.848000   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:39.879394   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.879406   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:39.879461   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:39.909065   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.909077   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:39.909133   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:39.938272   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.938283   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:39.938346   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:39.967027   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.967044   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:39.967102   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:39.996593   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.996605   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:39.996661   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:40.025955   28750 logs.go:274] 0 containers: []
	W0728 15:46:40.025967   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:40.026023   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:40.054606   28750 logs.go:274] 0 containers: []
	W0728 15:46:40.054618   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:40.054677   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:40.083931   28750 logs.go:274] 0 containers: []
	W0728 15:46:40.083944   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:40.083951   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:40.083958   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:40.122714   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:40.122727   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:40.133764   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:40.151970   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:40.205103   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:40.205113   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:40.205125   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:40.218748   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:40.218759   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:42.277646   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058908911s)
	I0728 15:46:44.779464   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:44.849951   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:44.881165   28750 logs.go:274] 0 containers: []
	W0728 15:46:44.881178   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:44.881238   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:44.909841   28750 logs.go:274] 0 containers: []
	W0728 15:46:44.909855   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:44.909917   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:44.941101   28750 logs.go:274] 0 containers: []
	W0728 15:46:44.941114   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:44.941179   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:44.972307   28750 logs.go:274] 0 containers: []
	W0728 15:46:44.972320   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:44.972376   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:45.006437   28750 logs.go:274] 0 containers: []
	W0728 15:46:45.006450   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:45.006508   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:45.036116   28750 logs.go:274] 0 containers: []
	W0728 15:46:45.036128   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:45.036185   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:45.064214   28750 logs.go:274] 0 containers: []
	W0728 15:46:45.064226   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:45.064286   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:45.093400   28750 logs.go:274] 0 containers: []
	W0728 15:46:45.093414   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:45.093420   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:45.093427   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:45.107382   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:45.107395   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:47.162864   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055491977s)
	I0728 15:46:47.162967   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:47.162974   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:47.205000   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:47.205023   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:47.216942   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:47.216956   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:47.269215   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:49.769983   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:49.849883   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:49.880517   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.880530   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:49.880587   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:49.908888   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.908904   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:49.908964   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:49.937900   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.937914   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:49.937975   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:49.966223   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.966236   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:49.966292   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:49.995275   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.995288   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:49.995344   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:50.025324   28750 logs.go:274] 0 containers: []
	W0728 15:46:50.025338   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:50.025396   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:50.054609   28750 logs.go:274] 0 containers: []
	W0728 15:46:50.054621   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:50.054679   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:50.082727   28750 logs.go:274] 0 containers: []
	W0728 15:46:50.082739   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:50.082746   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:50.082753   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:50.134737   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:50.151600   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:50.151609   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:50.166276   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:50.166289   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:52.220560   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054292421s)
	I0728 15:46:52.220667   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:52.220673   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:52.259245   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:52.259258   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:54.773839   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:54.847624   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:54.877686   28750 logs.go:274] 0 containers: []
	W0728 15:46:54.877698   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:54.877752   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:54.908194   28750 logs.go:274] 0 containers: []
	W0728 15:46:54.908206   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:54.908265   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:54.942839   28750 logs.go:274] 0 containers: []
	W0728 15:46:54.942851   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:54.942904   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:54.977060   28750 logs.go:274] 0 containers: []
	W0728 15:46:54.977072   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:54.977129   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:55.008268   28750 logs.go:274] 0 containers: []
	W0728 15:46:55.008285   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:55.008356   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:55.039796   28750 logs.go:274] 0 containers: []
	W0728 15:46:55.039809   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:55.039870   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:55.070921   28750 logs.go:274] 0 containers: []
	W0728 15:46:55.070933   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:55.070992   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:55.102136   28750 logs.go:274] 0 containers: []
	W0728 15:46:55.102153   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:55.102162   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:55.102171   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:55.144328   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:55.153238   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:55.166460   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:55.166474   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:55.223089   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:55.223101   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:55.223110   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:55.237281   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:55.237300   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:57.291911   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054631983s)
	I0728 15:46:59.792635   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:59.849540   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:59.895169   28750 logs.go:274] 0 containers: []
	W0728 15:46:59.895184   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:59.895245   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:59.928773   28750 logs.go:274] 0 containers: []
	W0728 15:46:59.928796   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:59.928862   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:59.958330   28750 logs.go:274] 0 containers: []
	W0728 15:46:59.958343   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:59.958400   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:59.995745   28750 logs.go:274] 0 containers: []
	W0728 15:46:59.995760   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:59.995825   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:47:00.026935   28750 logs.go:274] 0 containers: []
	W0728 15:47:00.026948   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:47:00.027009   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:47:00.060788   28750 logs.go:274] 0 containers: []
	W0728 15:47:00.060809   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:47:00.060874   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:47:00.093846   28750 logs.go:274] 0 containers: []
	W0728 15:47:00.093860   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:47:00.093918   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:47:00.124287   28750 logs.go:274] 0 containers: []
	W0728 15:47:00.124299   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:47:00.124305   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:47:00.124312   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:47:00.164158   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:47:00.172130   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:47:00.185741   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:47:00.185753   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:47:00.240788   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:47:00.240799   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:47:00.240805   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:47:00.255212   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:47:00.255226   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:47:02.307529   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052325075s)
	I0728 15:47:04.808886   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:47:04.847555   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:47:04.878556   28750 logs.go:274] 0 containers: []
	W0728 15:47:04.878568   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:47:04.878625   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:47:04.908422   28750 logs.go:274] 0 containers: []
	W0728 15:47:04.908435   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:47:04.908490   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:47:04.937750   28750 logs.go:274] 0 containers: []
	W0728 15:47:04.937763   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:47:04.937818   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:47:04.968416   28750 logs.go:274] 0 containers: []
	W0728 15:47:04.968429   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:47:04.968486   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:47:05.000569   28750 logs.go:274] 0 containers: []
	W0728 15:47:05.000582   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:47:05.000637   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:47:05.030908   28750 logs.go:274] 0 containers: []
	W0728 15:47:05.030920   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:47:05.030975   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:47:05.060797   28750 logs.go:274] 0 containers: []
	W0728 15:47:05.060809   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:47:05.060864   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:47:05.090150   28750 logs.go:274] 0 containers: []
	W0728 15:47:05.090164   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:47:05.090172   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:47:05.090181   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:47:05.104695   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:47:05.104708   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:47:07.156582   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051896764s)
	I0728 15:47:07.156690   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:47:07.156697   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:47:07.196373   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:47:07.196391   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:47:07.209900   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:47:07.209913   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:47:07.264664   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:47:09.764763   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:47:09.847630   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:47:09.886156   28750 logs.go:274] 0 containers: []
	W0728 15:47:09.886174   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:47:09.886249   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:47:09.922509   28750 logs.go:274] 0 containers: []
	W0728 15:47:09.922523   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:47:09.922587   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:47:09.957853   28750 logs.go:274] 0 containers: []
	W0728 15:47:09.957866   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:47:09.957925   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:47:09.992158   28750 logs.go:274] 0 containers: []
	W0728 15:47:09.992170   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:47:09.992228   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:47:10.027962   28750 logs.go:274] 0 containers: []
	W0728 15:47:10.027977   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:47:10.028038   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:47:10.071488   28750 logs.go:274] 0 containers: []
	W0728 15:47:10.071502   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:47:10.071586   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:47:10.108389   28750 logs.go:274] 0 containers: []
	W0728 15:47:10.108401   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:47:10.108463   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:47:10.149102   28750 logs.go:274] 0 containers: []
	W0728 15:47:10.152037   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:47:10.152048   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:47:10.152058   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:47:12.215163   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.063127s)
	I0728 15:47:12.215269   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:47:12.215276   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:47:12.260011   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:47:12.260029   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:47:12.275143   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:47:12.275157   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:47:12.338323   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:47:12.338343   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:47:12.338350   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:47:14.853015   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:47:15.347498   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:47:15.381097   28750 logs.go:274] 0 containers: []
	W0728 15:47:15.381110   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:47:15.381166   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:47:15.413976   28750 logs.go:274] 0 containers: []
	W0728 15:47:15.413989   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:47:15.414046   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:47:15.442784   28750 logs.go:274] 0 containers: []
	W0728 15:47:15.442798   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:47:15.442858   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:47:15.471357   28750 logs.go:274] 0 containers: []
	W0728 15:47:15.471370   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:47:15.471432   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:47:15.500576   28750 logs.go:274] 0 containers: []
	W0728 15:47:15.500588   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:47:15.500647   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:47:15.530906   28750 logs.go:274] 0 containers: []
	W0728 15:47:15.530918   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:47:15.530965   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:47:15.560677   28750 logs.go:274] 0 containers: []
	W0728 15:47:15.560689   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:47:15.560760   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:47:15.589500   28750 logs.go:274] 0 containers: []
	W0728 15:47:15.589512   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:47:15.589519   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:47:15.589526   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:47:15.630454   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:47:15.630472   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:47:15.644906   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:47:15.644924   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:47:15.711867   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:47:15.711880   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:47:15.711888   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:47:15.726535   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:47:15.726550   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:47:17.779143   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052616075s)
	I0728 15:47:20.280257   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:47:20.348671   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:47:20.380452   28750 logs.go:274] 0 containers: []
	W0728 15:47:20.380465   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:47:20.380529   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:47:20.410975   28750 logs.go:274] 0 containers: []
	W0728 15:47:20.410988   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:47:20.411043   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:47:20.439207   28750 logs.go:274] 0 containers: []
	W0728 15:47:20.439223   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:47:20.439280   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:47:20.469401   28750 logs.go:274] 0 containers: []
	W0728 15:47:20.469414   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:47:20.469469   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:47:20.496859   28750 logs.go:274] 0 containers: []
	W0728 15:47:20.496871   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:47:20.496929   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:47:20.525871   28750 logs.go:274] 0 containers: []
	W0728 15:47:20.525885   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:47:20.525941   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:47:20.555620   28750 logs.go:274] 0 containers: []
	W0728 15:47:20.555632   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:47:20.555692   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:47:20.584685   28750 logs.go:274] 0 containers: []
	W0728 15:47:20.584697   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:47:20.584704   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:47:20.584712   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:47:20.626679   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:47:20.626694   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:47:20.638940   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:47:20.638952   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:47:20.697439   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:47:20.697451   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:47:20.697460   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:47:20.713568   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:47:20.713579   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:47:22.767261   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05370456s)
	I0728 15:47:25.269590   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:47:25.347274   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:47:25.376053   28750 logs.go:274] 0 containers: []
	W0728 15:47:25.376066   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:47:25.376126   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:47:25.407210   28750 logs.go:274] 0 containers: []
	W0728 15:47:25.407224   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:47:25.407286   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:47:25.436442   28750 logs.go:274] 0 containers: []
	W0728 15:47:25.436454   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:47:25.436512   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:47:25.464444   28750 logs.go:274] 0 containers: []
	W0728 15:47:25.464457   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:47:25.464515   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:47:25.492505   28750 logs.go:274] 0 containers: []
	W0728 15:47:25.492518   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:47:25.492573   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:47:25.521441   28750 logs.go:274] 0 containers: []
	W0728 15:47:25.521454   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:47:25.521510   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:47:25.549785   28750 logs.go:274] 0 containers: []
	W0728 15:47:25.549797   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:47:25.549856   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:47:25.579762   28750 logs.go:274] 0 containers: []
	W0728 15:47:25.579775   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:47:25.579781   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:47:25.579788   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:47:25.593374   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:47:25.593387   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:47:27.646987   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053622878s)
	I0728 15:47:27.647095   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:47:27.647102   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:47:27.700910   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:47:27.700946   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:47:27.716145   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:47:27.716169   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:47:27.777151   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:47:30.279120   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:47:30.348670   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:47:30.382024   28750 logs.go:274] 0 containers: []
	W0728 15:47:30.382037   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:47:30.382102   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:47:30.412545   28750 logs.go:274] 0 containers: []
	W0728 15:47:30.412559   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:47:30.412623   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:47:30.444028   28750 logs.go:274] 0 containers: []
	W0728 15:47:30.444042   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:47:30.444103   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:47:30.473300   28750 logs.go:274] 0 containers: []
	W0728 15:47:30.473316   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:47:30.473381   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:47:30.503217   28750 logs.go:274] 0 containers: []
	W0728 15:47:30.503230   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:47:30.503292   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:47:30.531599   28750 logs.go:274] 0 containers: []
	W0728 15:47:30.531613   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:47:30.531671   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:47:30.569704   28750 logs.go:274] 0 containers: []
	W0728 15:47:30.569719   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:47:30.569778   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:47:30.598114   28750 logs.go:274] 0 containers: []
	W0728 15:47:30.598128   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:47:30.598135   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:47:30.598141   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:47:30.612146   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:47:30.612160   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:47:32.667630   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055493088s)
	I0728 15:47:32.667741   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:47:32.667747   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:47:32.709584   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:47:32.709597   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:47:32.721287   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:47:32.721301   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:47:32.778617   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:47:35.280846   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:47:35.349091   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:47:35.381388   28750 logs.go:274] 0 containers: []
	W0728 15:47:35.381402   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:47:35.381461   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:47:35.411667   28750 logs.go:274] 0 containers: []
	W0728 15:47:35.411680   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:47:35.411736   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:47:35.440735   28750 logs.go:274] 0 containers: []
	W0728 15:47:35.440748   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:47:35.440812   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:47:35.470456   28750 logs.go:274] 0 containers: []
	W0728 15:47:35.470467   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:47:35.470524   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:47:35.502028   28750 logs.go:274] 0 containers: []
	W0728 15:47:35.502040   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:47:35.502104   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:47:35.536357   28750 logs.go:274] 0 containers: []
	W0728 15:47:35.536369   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:47:35.536428   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:47:35.568677   28750 logs.go:274] 0 containers: []
	W0728 15:47:35.568694   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:47:35.568758   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:47:35.597788   28750 logs.go:274] 0 containers: []
	W0728 15:47:35.597801   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:47:35.597809   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:47:35.597816   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:47:35.649570   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:47:35.649580   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:47:35.649587   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:47:35.663247   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:47:35.663259   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:47:37.715254   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052017321s)
	I0728 15:47:37.715361   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:47:37.715367   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:47:37.756370   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:47:37.756384   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:47:40.270186   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:47:40.346896   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:47:40.375375   28750 logs.go:274] 0 containers: []
	W0728 15:47:40.375388   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:47:40.375449   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:47:40.407108   28750 logs.go:274] 0 containers: []
	W0728 15:47:40.407120   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:47:40.407191   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:47:40.438028   28750 logs.go:274] 0 containers: []
	W0728 15:47:40.438040   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:47:40.438100   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:47:40.467590   28750 logs.go:274] 0 containers: []
	W0728 15:47:40.467603   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:47:40.467671   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:47:40.496212   28750 logs.go:274] 0 containers: []
	W0728 15:47:40.496224   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:47:40.496282   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:47:40.525472   28750 logs.go:274] 0 containers: []
	W0728 15:47:40.525484   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:47:40.525543   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:47:40.554837   28750 logs.go:274] 0 containers: []
	W0728 15:47:40.554858   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:47:40.554921   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:47:40.586204   28750 logs.go:274] 0 containers: []
	W0728 15:47:40.586216   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:47:40.586223   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:47:40.586230   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:47:40.597686   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:47:40.597698   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:47:40.649070   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:47:40.649083   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:47:40.649089   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:47:40.663072   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:47:40.663084   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:47:42.714156   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051094256s)
	I0728 15:47:42.714261   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:47:42.714269   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:47:45.254006   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:47:45.346960   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:47:45.376700   28750 logs.go:274] 0 containers: []
	W0728 15:47:45.376712   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:47:45.376770   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:47:45.408069   28750 logs.go:274] 0 containers: []
	W0728 15:47:45.408081   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:47:45.408140   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:47:45.436587   28750 logs.go:274] 0 containers: []
	W0728 15:47:45.436599   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:47:45.436655   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:47:45.468027   28750 logs.go:274] 0 containers: []
	W0728 15:47:45.468040   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:47:45.468096   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:47:45.497364   28750 logs.go:274] 0 containers: []
	W0728 15:47:45.497379   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:47:45.497447   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:47:45.531303   28750 logs.go:274] 0 containers: []
	W0728 15:47:45.531315   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:47:45.531376   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:47:45.567228   28750 logs.go:274] 0 containers: []
	W0728 15:47:45.567241   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:47:45.567300   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:47:45.597956   28750 logs.go:274] 0 containers: []
	W0728 15:47:45.597969   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:47:45.597976   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:47:45.597983   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:47:45.638951   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:47:45.638971   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:47:45.656358   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:47:45.656373   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:47:45.711876   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:47:45.711886   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:47:45.711893   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:47:45.726150   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:47:45.726162   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:47:47.787447   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061305758s)
	I0728 15:47:50.287888   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:47:50.347082   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:47:50.378004   28750 logs.go:274] 0 containers: []
	W0728 15:47:50.378016   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:47:50.378074   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:47:50.406564   28750 logs.go:274] 0 containers: []
	W0728 15:47:50.406576   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:47:50.406632   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:47:50.434462   28750 logs.go:274] 0 containers: []
	W0728 15:47:50.434475   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:47:50.434531   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:47:50.463772   28750 logs.go:274] 0 containers: []
	W0728 15:47:50.463785   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:47:50.463845   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:47:50.492111   28750 logs.go:274] 0 containers: []
	W0728 15:47:50.492124   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:47:50.492182   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:47:50.521096   28750 logs.go:274] 0 containers: []
	W0728 15:47:50.521109   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:47:50.521167   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:47:50.549277   28750 logs.go:274] 0 containers: []
	W0728 15:47:50.549289   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:47:50.549358   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:47:50.578023   28750 logs.go:274] 0 containers: []
	W0728 15:47:50.578039   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:47:50.578046   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:47:50.578055   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:47:50.630560   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:47:50.630570   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:47:50.630576   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:47:50.644760   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:47:50.644781   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:47:52.703996   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059232488s)
	I0728 15:47:52.704861   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:47:52.704987   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:47:52.745303   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:47:52.745317   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:47:55.257401   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:47:55.347175   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:47:55.376341   28750 logs.go:274] 0 containers: []
	W0728 15:47:55.376355   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:47:55.376412   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:47:55.405614   28750 logs.go:274] 0 containers: []
	W0728 15:47:55.405627   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:47:55.405691   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:47:55.435257   28750 logs.go:274] 0 containers: []
	W0728 15:47:55.435271   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:47:55.435327   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:47:55.467550   28750 logs.go:274] 0 containers: []
	W0728 15:47:55.467563   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:47:55.467619   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:47:55.496545   28750 logs.go:274] 0 containers: []
	W0728 15:47:55.496557   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:47:55.496613   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:47:55.528765   28750 logs.go:274] 0 containers: []
	W0728 15:47:55.528777   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:47:55.528835   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:47:55.557409   28750 logs.go:274] 0 containers: []
	W0728 15:47:55.557423   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:47:55.557481   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:47:55.586995   28750 logs.go:274] 0 containers: []
	W0728 15:47:55.587009   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:47:55.587017   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:47:55.587024   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:47:55.626729   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:47:55.626743   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:47:55.638030   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:47:55.638045   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:47:55.690532   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:47:55.690543   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:47:55.690549   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:47:55.704816   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:47:55.704828   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:47:57.758848   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054042949s)
	I0728 15:48:00.259194   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:48:00.269311   28750 kubeadm.go:630] restartCluster took 4m5.283242711s
	W0728 15:48:00.269389   28750 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0728 15:48:00.269403   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0728 15:48:00.690224   28750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:48:00.699541   28750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:48:00.706963   28750 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:48:00.707007   28750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:48:00.714249   28750 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 15:48:00.714270   28750 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:48:01.429221   28750 out.go:204]   - Generating certificates and keys ...
	I0728 15:48:02.074392   28750 out.go:204]   - Booting up control plane ...
	W0728 15:49:56.989217   28750 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0728 15:49:56.989249   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0728 15:49:57.413331   28750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:49:57.423140   28750 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:49:57.423194   28750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:49:57.430740   28750 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 15:49:57.430758   28750 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:49:58.137590   28750 out.go:204]   - Generating certificates and keys ...
	I0728 15:49:58.961107   28750 out.go:204]   - Booting up control plane ...
	I0728 15:51:53.880636   28750 kubeadm.go:397] StartCluster complete in 7m58.936593061s
	I0728 15:51:53.880713   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:51:53.913209   28750 logs.go:274] 0 containers: []
	W0728 15:51:53.913222   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:51:53.913282   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:51:53.943409   28750 logs.go:274] 0 containers: []
	W0728 15:51:53.943421   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:51:53.943481   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:51:53.973451   28750 logs.go:274] 0 containers: []
	W0728 15:51:53.973463   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:51:53.973516   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:51:54.002910   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.002922   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:51:54.002981   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:51:54.035653   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.035665   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:51:54.035724   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:51:54.068593   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.068606   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:51:54.068668   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:51:54.098273   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.098285   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:51:54.098344   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:51:54.127232   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.127244   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:51:54.127252   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:51:54.127259   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:51:56.179496   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052257174s)
	I0728 15:51:56.179636   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:51:56.179644   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:51:56.220729   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:51:56.220744   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:51:56.232226   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:51:56.232240   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:51:56.289365   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:51:56.289376   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:51:56.289383   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0728 15:51:56.303663   28750 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0728 15:51:56.303683   28750 out.go:239] * 
	W0728 15:51:56.303804   28750 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 15:51:56.303823   28750 out.go:239] * 
	W0728 15:51:56.304345   28750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 15:51:56.399661   28750 out.go:177] 
	W0728 15:51:56.441944   28750 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0728 15:51:56.442070   28750 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0728 15:51:56.442149   28750 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0728 15:51:56.483570   28750 out.go:177] 

                                                
                                                
** /stderr **
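For anyone triaging this failure by hand, the kubeadm output captured above already names the relevant probes. A minimal sketch of that triage, run inside the node via 'minikube ssh -p old-k8s-version-20220728153807-12923' (CONTAINERID below is a placeholder for whatever 'docker ps' reports):

	# Did the kubelet service start, and why did it exit?
	systemctl status kubelet
	journalctl -xeu kubelet
	# Which control-plane containers did Docker manage to create?
	docker ps -a | grep kube | grep -v pause
	# Inspect the logs of a failing container found above
	docker logs CONTAINERID

All four commands come directly from the "Unfortunately, an error has occurred" block that kubeadm printed; nothing here goes beyond what the log itself recommends.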
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-20220728153807-12923 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
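The failing start command and minikube's own "Suggestion" line near the end of the log together imply a concrete retry. A sketch of that retry, assuming the systemd cgroup driver actually matches this node (something to verify first with: docker info --format '{{.CgroupDriver}}'):

	out/minikube-darwin-amd64 start -p old-k8s-version-20220728153807-12923 \
	  --memory=2200 --driver=docker --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

The profile, memory, driver, and Kubernetes version flags are taken from the failing invocation above (other flags omitted for brevity); --extra-config=kubelet.cgroup-driver=systemd is exactly what the log suggests, not a verified fix.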
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220728153807-12923
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220728153807-12923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f",
	        "Created": "2022-07-28T22:38:14.165684968Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246485,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:43:51.426692673Z",
	            "FinishedAt": "2022-07-28T22:43:48.536711569Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hosts",
	        "LogPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f-json.log",
	        "Name": "/old-k8s-version-20220728153807-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220728153807-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220728153807-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220728153807-12923",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220728153807-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220728153807-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "468350ddce385e27616eb7d67f293e8984e4658354bccab9cc7f747311c10282",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58969"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58970"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58971"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58972"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58973"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/468350ddce38",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220728153807-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2056d9a86a4c",
	                        "old-k8s-version-20220728153807-12923"
	                    ],
	                    "NetworkID": "a0b55590b406427f4aa9e75be1fbe382dd54fa7a1c14e888e401b45bb478b32d",
	                    "EndpointID": "d3216ca95fb05fd9cb589a1b6ef0ebe5edfacf75863c36ec7c40cddaa73c1dc8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
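Note: the inspect dump above is what helpers_test captures for post-mortem debugging. Rather than reading the whole document, the same Go templates this harness runs through cli_runner (visible throughout this report) can pull single fields; a minimal sketch against this profile's container, assuming it still exists:

    docker container inspect -f '{{.State.Status}}' old-k8s-version-20220728153807-12923
    # host port mapped to the apiserver port 8443/tcp (58973 in the dump above)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-20220728153807-12923
    # both mounts (the read-only /lib/modules bind and the profile volume on /var) as JSON
    docker container inspect -f '{{json .Mounts}}' old-k8s-version-20220728153807-12923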
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923: exit status 2 (418.829735ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
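Note: minikube encodes component health in the status exit code as a small bitmask (host, cluster, Kubernetes), so exit status 2 alongside a Running host reads as "cluster components not up yet" rather than a hard host failure, which is why the harness marks it "(may be ok)" and proceeds to post-mortem logs. The --format flag takes a Go template over the status struct; a hedged sketch querying several fields at once (field names as printed by a plain `minikube status`):

    out/minikube-darwin-amd64 status -p old-k8s-version-20220728153807-12923 \
      --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'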
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220728153807-12923 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220728153807-12923 logs -n 25: (3.525300837s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | kubenet-20220728152330-12923            | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:38 PDT |
	|         | kubenet-20220728152330-12923                      |                                         |         |         |                     |                     |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220728152330-12923 | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:37 PDT |
	|         | enable-default-cni-20220728152330-12923           |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220728152330-12923 | jenkins | v1.26.0 | 28 Jul 22 15:38 PDT | 28 Jul 22 15:38 PDT |
	|         | enable-default-cni-20220728152330-12923           |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:38 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220728152330-12923            | jenkins | v1.26.0 | 28 Jul 22 15:38 PDT | 28 Jul 22 15:38 PDT |
	|         | kubenet-20220728152330-12923                      |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220728152330-12923            | jenkins | v1.26.0 | 28 Jul 22 15:39 PDT | 28 Jul 22 15:39 PDT |
	|         | kubenet-20220728152330-12923                      |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:39 PDT | 28 Jul 22 15:40 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:40 PDT | 28 Jul 22 15:40 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:40 PDT | 28 Jul 22 15:41 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:41 PDT | 28 Jul 22 15:41 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:41 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:42 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT | 28 Jul 22 15:43 PDT |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT | 28 Jul 22 15:43 PDT |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220728154707-12923        | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | embed-certs-20220728154707-12923                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220728154707-12923        | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220728154707-12923        | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220728154707-12923        | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220728154707-12923        | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT |                     |
	|         | embed-certs-20220728154707-12923                  |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
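Note: for reproduction, the failing SecondStart invocation in the Audit rows above collapses to a single command line; every flag below is taken directly from the table, nothing added:

    out/minikube-darwin-amd64 start -p old-k8s-version-20220728153807-12923 \
      --memory=2200 --alsologtostderr --wait=true \
      --kvm-network=default --kvm-qemu-uri=qemu:///system \
      --disable-driver-mounts --keep-context=false \
      --driver=docker --kubernetes-version=v1.16.0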
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 15:48:16
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 15:48:16.801497   29417 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:48:16.801653   29417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:48:16.801659   29417 out.go:309] Setting ErrFile to fd 2...
	I0728 15:48:16.801663   29417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:48:16.801763   29417 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:48:16.802276   29417 out.go:303] Setting JSON to false
	I0728 15:48:16.817752   29417 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9538,"bootTime":1659038958,"procs":351,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:48:16.817842   29417 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:48:16.838782   29417 out.go:177] * [embed-certs-20220728154707-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:48:16.880995   29417 notify.go:193] Checking for updates...
	I0728 15:48:16.901706   29417 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:48:16.927978   29417 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:48:16.949282   29417 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:48:16.971082   29417 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:48:16.992933   29417 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:48:17.014719   29417 config.go:178] Loaded profile config "embed-certs-20220728154707-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:48:17.015366   29417 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:48:17.082982   29417 docker.go:137] docker version: linux-20.10.17
	I0728 15:48:17.083098   29417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:48:17.212741   29417 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:48:17.139329135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:48:17.234803   29417 out.go:177] * Using the docker driver based on existing profile
	I0728 15:48:17.256309   29417 start.go:284] selected driver: docker
	I0728 15:48:17.256336   29417 start.go:808] validating driver "docker" against &{Name:embed-certs-20220728154707-12923 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220728154707-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:48:17.256500   29417 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:48:17.259510   29417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:48:17.389758   29417 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:48:17.31689262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:48:17.389932   29417 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 15:48:17.389950   29417 cni.go:95] Creating CNI manager for ""
	I0728 15:48:17.389959   29417 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:48:17.389967   29417 start_flags.go:310] config:
	{Name:embed-certs-20220728154707-12923 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220728154707-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:48:17.433599   29417 out.go:177] * Starting control plane node embed-certs-20220728154707-12923 in cluster embed-certs-20220728154707-12923
	I0728 15:48:17.455669   29417 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:48:17.477751   29417 out.go:177] * Pulling base image ...
	I0728 15:48:17.519730   29417 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:48:17.519790   29417 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:48:17.519811   29417 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0728 15:48:17.519834   29417 cache.go:57] Caching tarball of preloaded images
	I0728 15:48:17.520022   29417 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:48:17.520061   29417 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
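Note: the preload lookup above is a plain filesystem check, the tarball is stat'ed under the profile's cache directory and the download is skipped when it is present. A quick manual check of the same path, assuming MINIKUBE_HOME is set as in the environment listing earlier in this log:

    ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4"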
	I0728 15:48:17.521034   29417 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/config.json ...
	I0728 15:48:17.586890   29417 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:48:17.586906   29417 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:48:17.586916   29417 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:48:17.586967   29417 start.go:370] acquiring machines lock for embed-certs-20220728154707-12923: {Name:mkafc927efa8de6adf00771129c22ebc3d05578e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:48:17.587045   29417 start.go:374] acquired machines lock for "embed-certs-20220728154707-12923" in 61.043µs
	I0728 15:48:17.587065   29417 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:48:17.587073   29417 fix.go:55] fixHost starting: 
	I0728 15:48:17.587306   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:48:17.651525   29417 fix.go:103] recreateIfNeeded on embed-certs-20220728154707-12923: state=Stopped err=<nil>
	W0728 15:48:17.651560   29417 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:48:17.673382   29417 out.go:177] * Restarting existing docker container for "embed-certs-20220728154707-12923" ...
	I0728 15:48:17.694189   29417 cli_runner.go:164] Run: docker start embed-certs-20220728154707-12923
	I0728 15:48:18.024290   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:48:18.089501   29417 kic.go:415] container "embed-certs-20220728154707-12923" state is running.
	I0728 15:48:18.090096   29417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220728154707-12923
	I0728 15:48:18.159136   29417 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/config.json ...
	I0728 15:48:18.159526   29417 machine.go:88] provisioning docker machine ...
	I0728 15:48:18.159550   29417 ubuntu.go:169] provisioning hostname "embed-certs-20220728154707-12923"
	I0728 15:48:18.159621   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:18.226433   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:18.226656   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:18.226667   29417 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220728154707-12923 && echo "embed-certs-20220728154707-12923" | sudo tee /etc/hostname
	I0728 15:48:18.356370   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220728154707-12923
	
	I0728 15:48:18.356474   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:18.422345   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:18.422505   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:18.422519   29417 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220728154707-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220728154707-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220728154707-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:48:18.541839   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:48:18.541862   29417 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:48:18.541893   29417 ubuntu.go:177] setting up certificates
	I0728 15:48:18.541906   29417 provision.go:83] configureAuth start
	I0728 15:48:18.541981   29417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220728154707-12923
	I0728 15:48:18.607419   29417 provision.go:138] copyHostCerts
	I0728 15:48:18.607510   29417 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:48:18.607520   29417 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:48:18.607611   29417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:48:18.607810   29417 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:48:18.607820   29417 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:48:18.607885   29417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:48:18.608037   29417 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:48:18.608043   29417 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:48:18.608100   29417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:48:18.608266   29417 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220728154707-12923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220728154707-12923]
	I0728 15:48:18.782136   29417 provision.go:172] copyRemoteCerts
	I0728 15:48:18.782203   29417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:48:18.782257   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:18.846797   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:18.934758   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:48:18.952068   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0728 15:48:18.968832   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 15:48:18.984908   29417 provision.go:86] duration metric: configureAuth took 442.991274ms
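Note: configureAuth regenerated the server certificate with the SAN list above and scp'd ca.pem/server.pem/server-key.pem into /etc/docker, which matches the --tlsverify flags in the dockerd ExecStart written a few lines below. A hedged sketch of exercising that endpoint from the host once provisioning completes; the inspect template and the cert locations under the minikube home are taken from this log, and the docker CLI TLS flags are the standard ones:

    PORT=$(docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "2376/tcp") 0).HostPort}}' \
      embed-certs-20220728154707-12923)
    docker --tlsverify \
      --tlscacert "$MINIKUBE_HOME/ca.pem" \
      --tlscert "$MINIKUBE_HOME/cert.pem" \
      --tlskey "$MINIKUBE_HOME/key.pem" \
      -H "tcp://127.0.0.1:$PORT" version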
	I0728 15:48:18.984926   29417 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:48:18.985086   29417 config.go:178] Loaded profile config "embed-certs-20220728154707-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:48:18.985144   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.049284   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:19.049430   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:19.049439   29417 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:48:19.169479   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:48:19.169492   29417 ubuntu.go:71] root file system type: overlay
	I0728 15:48:19.169633   29417 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:48:19.169705   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.234334   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:19.234474   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:19.234535   29417 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:48:19.363314   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:48:19.363389   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.474028   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:19.474168   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:19.474181   29417 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:48:19.600015   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: 
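
Note: the SSH command above only swaps in docker.service.new and restarts docker when diff reports a difference, keeping the operation idempotent. A rough equivalent in Go (replaceIfChanged is an assumed helper name, not minikube's API):

    // Sketch of the "replace only when changed" pattern: avoid restarting
    // docker when the rendered unit matches what is already installed.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func replaceIfChanged(current, next string) error {
    	a, _ := os.ReadFile(current) // a missing file reads as empty, so it differs
    	b, err := os.ReadFile(next)
    	if err != nil {
    		return err
    	}
    	if bytes.Equal(a, b) {
    		return nil // identical: leave the running service undisturbed
    	}
    	if err := os.Rename(next, current); err != nil {
    		return err
    	}
    	// daemon-reload picks up the new unit, then restart applies it.
    	for _, args := range [][]string{{"daemon-reload"}, {"restart", "docker"}} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := replaceIfChanged("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
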
	I0728 15:48:19.600035   29417 machine.go:91] provisioned docker machine in 1.440525827s
	I0728 15:48:19.600045   29417 start.go:307] post-start starting for "embed-certs-20220728154707-12923" (driver="docker")
	I0728 15:48:19.600050   29417 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:48:19.600116   29417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:48:19.600159   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.664230   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:19.753490   29417 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:48:19.756748   29417 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:48:19.756763   29417 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:48:19.756771   29417 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:48:19.756775   29417 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:48:19.756786   29417 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:48:19.756892   29417 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:48:19.757034   29417 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:48:19.757184   29417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:48:19.764556   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:48:19.780836   29417 start.go:310] post-start completed in 180.786416ms
	I0728 15:48:19.780924   29417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:48:19.780983   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.845831   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:19.931331   29417 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:48:19.935737   29417 fix.go:57] fixHost completed within 2.348703259s
	I0728 15:48:19.935747   29417 start.go:82] releasing machines lock for "embed-certs-20220728154707-12923", held for 2.348733925s
	I0728 15:48:19.935815   29417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220728154707-12923
	I0728 15:48:19.999291   29417 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:48:19.999293   29417 ssh_runner.go:195] Run: systemctl --version
	I0728 15:48:19.999351   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.999350   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:20.066602   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:20.066639   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:20.336780   29417 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:48:20.345950   29417 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:48:20.346003   29417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:48:20.357290   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:48:20.369623   29417 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:48:20.444639   29417 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:48:20.511294   29417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:48:20.570419   29417 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:48:20.816253   29417 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 15:48:20.888550   29417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:48:20.958099   29417 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 15:48:20.967785   29417 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 15:48:20.967856   29417 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 15:48:20.971649   29417 start.go:471] Will wait 60s for crictl version
	I0728 15:48:20.971697   29417 ssh_runner.go:195] Run: sudo crictl version
	I0728 15:48:21.074684   29417 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
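
Note: "Will wait 60s for socket path" above is a simple existence poll against the CRI socket. A sketch under those assumptions (the 500ms interval is a guess; the log does not state one):

    // Sketch of the 60s socket wait: stat the CRI socket until it exists.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/var/run/cri-dockerd.sock"
    	deadline := time.Now().Add(60 * time.Second)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(sock); err == nil {
    			fmt.Println(sock, "is up")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for", sock)
    }
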
	I0728 15:48:21.074748   29417 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:48:21.109656   29417 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:48:21.190920   29417 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 15:48:21.191106   29417 cli_runner.go:164] Run: docker exec -t embed-certs-20220728154707-12923 dig +short host.docker.internal
	I0728 15:48:21.315815   29417 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:48:21.315918   29417 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:48:21.320085   29417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
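
Note: the bash one-liner above drops any stale host.minikube.internal line and appends the fresh mapping. The same idea as a Go sketch (upsertHost is a hypothetical helper; a real version would write via a temp file and sudo cp, as the log does):

    // Sketch of the /etc/hosts rewrite: remove any old line for the host,
    // then append the current IP mapping.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func upsertHost(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := upsertHost("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
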
	I0728 15:48:21.329958   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:21.394974   29417 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:48:21.395041   29417 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:48:21.425612   29417 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0728 15:48:21.425628   29417 docker.go:542] Images already preloaded, skipping extraction
	I0728 15:48:21.425703   29417 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:48:21.455867   29417 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0728 15:48:21.455887   29417 cache_images.go:84] Images are preloaded, skipping loading
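
Note: the docker images listings above are compared against the expected preload set; since everything is present, tarball extraction is skipped. A sketch of that check (the required list here is a subset, for illustration only):

    // Sketch of the preload check: list what the daemon already has and
    // skip loading the preload tarball when every required image exists.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	required := []string{ // subset of the images listed above
    		"k8s.gcr.io/kube-apiserver:v1.24.3",
    		"k8s.gcr.io/etcd:3.5.3-0",
    		"k8s.gcr.io/pause:3.7",
    	}
    	for _, img := range required {
    		if !have[img] {
    			fmt.Println("missing", img, "- would load from preload tarball")
    			return
    		}
    	}
    	fmt.Println("Images are preloaded, skipping loading")
    }
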
	I0728 15:48:21.455976   29417 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:48:21.528212   29417 cni.go:95] Creating CNI manager for ""
	I0728 15:48:21.528224   29417 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:48:21.528238   29417 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:48:21.528250   29417 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220728154707-12923 NodeName:embed-certs-20220728154707-12923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:48:21.528359   29417 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220728154707-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
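
Note: minikube renders this multi-document kubeadm config from Go templates. A much-reduced sketch of that approach (the template below is hypothetical and carries only a few fields; the values are taken from this run):

    // Sketch: render a multi-document kubeadm config with text/template,
    // joining documents with "---" as in the YAML above.
    package main

    import (
    	"os"
    	"text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.Version}}
    controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	// Values mirror the cluster in this log.
    	data := struct {
    		NodeIP  string
    		Version string
    		Port    int
    	}{"192.168.67.2", "v1.24.3", 8443}
    	if err := t.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }
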
	I0728 15:48:21.528439   29417 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220728154707-12923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220728154707-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0728 15:48:21.528500   29417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 15:48:21.536494   29417 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:48:21.536551   29417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:48:21.543737   29417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0728 15:48:21.556138   29417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:48:21.568510   29417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0728 15:48:21.580344   29417 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:48:21.583879   29417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:48:21.593400   29417 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923 for IP: 192.168.67.2
	I0728 15:48:21.593521   29417 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:48:21.593573   29417 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:48:21.593648   29417 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/client.key
	I0728 15:48:21.593716   29417 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/apiserver.key.c7fa3a9e
	I0728 15:48:21.593765   29417 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/proxy-client.key
	I0728 15:48:21.593961   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:48:21.593997   29417 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:48:21.594013   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:48:21.594046   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:48:21.594075   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:48:21.594102   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:48:21.594168   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:48:21.594672   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:48:21.611272   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 15:48:21.627747   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:48:21.644521   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 15:48:21.661383   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:48:21.677553   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:48:21.693614   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:48:21.710372   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:48:21.727241   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:48:21.743529   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:48:21.760108   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:48:21.776707   29417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 15:48:21.790531   29417 ssh_runner.go:195] Run: openssl version
	I0728 15:48:21.796365   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:48:21.803971   29417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:48:21.829542   29417 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:48:21.829588   29417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:48:21.834653   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 15:48:21.841901   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:48:21.849320   29417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:48:21.853199   29417 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:48:21.853254   29417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:48:21.858491   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 15:48:21.865400   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:48:21.872773   29417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:48:21.876616   29417 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:48:21.876657   29417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:48:21.881915   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
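
Note: the openssl x509 -hash / ln -fs pairs above install each certificate under its OpenSSL subject-hash name (e.g. b5213941.0) so that OpenSSL-based clients can find it in /etc/ssl/certs. A sketch of one such step (linkByHash is a hypothetical helper):

    // Sketch of the CA-hash symlink step: openssl -hash prints the subject
    // hash used as the /etc/ssl/certs/<hash>.0 lookup name.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func linkByHash(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	// ln -fs equivalent: drop any stale link before creating ours.
    	os.Remove(link)
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
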
	I0728 15:48:21.888912   29417 kubeadm.go:395] StartCluster: {Name:embed-certs-20220728154707-12923 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220728154707-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:48:21.889007   29417 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:48:21.917473   29417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:48:21.925062   29417 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 15:48:21.925078   29417 kubeadm.go:626] restartCluster start
	I0728 15:48:21.925120   29417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 15:48:21.931678   29417 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:21.931736   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:21.995670   29417 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220728154707-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:48:21.995837   29417 kubeconfig.go:127] "embed-certs-20220728154707-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 15:48:21.996174   29417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:48:21.997302   29417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 15:48:22.004907   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.004963   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.013141   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:22.215267   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.215476   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.226779   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:22.415295   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.415493   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.426659   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:22.614923   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.615024   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.625988   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:22.813546   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.813626   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.822993   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.015317   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.015544   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.025792   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.215277   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.215416   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.226602   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.413566   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.413660   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.424136   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.613588   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.613654   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.622662   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.814664   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.814775   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.825692   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.015300   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.015423   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.026137   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.215292   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.215465   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.226205   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.413410   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.413583   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.423694   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.614714   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.614873   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.625456   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.814430   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.814534   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.825126   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.013304   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:25.013447   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:25.024258   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.024268   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:25.024313   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:25.032119   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.032130   29417 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
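
Note: the block above retries pgrep roughly every 200ms and finally gives up with "timed out waiting for the condition", which triggers the reconfigure path. The shape of that loop, sketched generically (interval and timeout here are assumptions):

    // Generic sketch of the retry loop visible above: probe until success
    // or deadline, then let the caller fall back to reconfiguring.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitFor(probe func() error, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := probe(); err == nil {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for the condition")
    }

    func main() {
    	err := waitFor(func() error {
    		// Same probe as the log: pgrep exits non-zero when no apiserver runs.
    		return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    	}, 200*time.Millisecond, 3*time.Second)
    	fmt.Println("result:", err)
    }
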
	I0728 15:48:25.032138   29417 kubeadm.go:1092] stopping kube-system containers ...
	I0728 15:48:25.032191   29417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:48:25.061839   29417 docker.go:443] Stopping containers: [29457d4df313 6ef1cce588df 7d5bc5e389db 8302d6c6443f cdfeb1ef5523 0a683159e91c 25acc0420e55 d443c52be75c a610b508b031 06e6110a99f1 8e2b7312dd3c 7c97803e2ece 4ef5ce52c329 2f78d4daa24c 6855ea1ad5cf f6b3a99aae08]
	I0728 15:48:25.061917   29417 ssh_runner.go:195] Run: docker stop 29457d4df313 6ef1cce588df 7d5bc5e389db 8302d6c6443f cdfeb1ef5523 0a683159e91c 25acc0420e55 d443c52be75c a610b508b031 06e6110a99f1 8e2b7312dd3c 7c97803e2ece 4ef5ce52c329 2f78d4daa24c 6855ea1ad5cf f6b3a99aae08
	I0728 15:48:25.098171   29417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 15:48:25.111390   29417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:48:25.119798   29417 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 28 22:47 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 28 22:47 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jul 28 22:47 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 28 22:47 /etc/kubernetes/scheduler.conf
	
	I0728 15:48:25.119853   29417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 15:48:25.127053   29417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 15:48:25.134998   29417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 15:48:25.142975   29417 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.143035   29417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 15:48:25.150202   29417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 15:48:25.157246   29417 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.157293   29417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0728 15:48:25.164016   29417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:48:25.171050   29417 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 15:48:25.171059   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:25.216093   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:25.844343   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:26.021662   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:26.072258   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
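
Note: the restart path runs kubeadm init one phase at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init, so existing cluster state is reused. A sketch of that sequencing (paths as in the log; PATH injection and error handling simplified):

    // Sketch of the phase-by-phase kubeadm restart shown above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
    			os.Exit(1)
    		}
    	}
    }
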
	I0728 15:48:26.134783   29417 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:48:26.134841   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:48:26.644326   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:48:27.144282   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:48:27.154756   29417 api_server.go:71] duration metric: took 1.019990602s to wait for apiserver process to appear ...
	I0728 15:48:27.154775   29417 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:48:27.154789   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:27.155957   29417 api_server.go:256] stopped: https://127.0.0.1:59133/healthz: Get "https://127.0.0.1:59133/healthz": EOF
	I0728 15:48:27.657022   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:30.452815   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 15:48:30.452848   29417 api_server.go:102] status: https://127.0.0.1:59133/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 15:48:30.657052   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:30.662787   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:48:30.662799   29417 api_server.go:102] status: https://127.0.0.1:59133/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:48:31.156224   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:31.161953   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:48:31.161970   29417 api_server.go:102] status: https://127.0.0.1:59133/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:48:31.656203   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:31.664096   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 200:
	ok
	I0728 15:48:31.670530   29417 api_server.go:140] control plane version: v1.24.3
	I0728 15:48:31.670542   29417 api_server.go:130] duration metric: took 4.515837693s to wait for apiserver health ...
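
Note: the healthz probe above tolerates EOF (apiserver not yet listening), then 403 (RBAC not yet bootstrapped), then 500 (post-start hooks still failing), until a plain 200 "ok". A one-shot sketch of the probe (client settings are assumptions; the port is this run's):

    // Sketch of a single healthz check: the apiserver serves a self-signed
    // cert on 127.0.0.1, so verification is skipped for the probe.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://127.0.0.1:59133/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err) // e.g. EOF while the apiserver is coming up
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
    }
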
	I0728 15:48:31.670547   29417 cni.go:95] Creating CNI manager for ""
	I0728 15:48:31.670552   29417 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:48:31.670562   29417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:48:31.676716   29417 system_pods.go:59] 8 kube-system pods found
	I0728 15:48:31.676732   29417 system_pods.go:61] "coredns-6d4b75cb6d-sz4ss" [b1735e46-67cb-4a2a-9a12-260c98968b3a] Running
	I0728 15:48:31.676746   29417 system_pods.go:61] "etcd-embed-certs-20220728154707-12923" [a389e720-76d6-499e-b34e-3f8013bce707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0728 15:48:31.676751   29417 system_pods.go:61] "kube-apiserver-embed-certs-20220728154707-12923" [b5ea0f7a-5c6d-41c4-8e7c-f86e90a70222] Running
	I0728 15:48:31.676757   29417 system_pods.go:61] "kube-controller-manager-embed-certs-20220728154707-12923" [9c192195-2527-4438-aa9b-bffc0aebccd1] Running
	I0728 15:48:31.676760   29417 system_pods.go:61] "kube-proxy-hhj48" [11442494-68d4-468e-b506-0302c7692a8d] Running
	I0728 15:48:31.676765   29417 system_pods.go:61] "kube-scheduler-embed-certs-20220728154707-12923" [37f5c49d-8386-4440-ba01-f9d4a3eb7d05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 15:48:31.676772   29417 system_pods.go:61] "metrics-server-5c6f97fb75-b525p" [1aad746e-e8e5-44ae-a006-2655a20b240b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:48:31.676777   29417 system_pods.go:61] "storage-provisioner" [0e4110a8-11ee-4fe9-8b5e-6874c4466099] Running
	I0728 15:48:31.676780   29417 system_pods.go:74] duration metric: took 6.214305ms to wait for pod list to return data ...
	I0728 15:48:31.676788   29417 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:48:31.679407   29417 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:48:31.679420   29417 node_conditions.go:123] node cpu capacity is 6
	I0728 15:48:31.679429   29417 node_conditions.go:105] duration metric: took 2.637683ms to run NodePressure ...
	I0728 15:48:31.679439   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:31.806182   29417 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0728 15:48:31.829087   29417 kubeadm.go:777] kubelet initialised
	I0728 15:48:31.829099   29417 kubeadm.go:778] duration metric: took 4.724763ms waiting for restarted kubelet to initialise ...
	I0728 15:48:31.829106   29417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:48:31.835265   29417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-sz4ss" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:31.839853   29417 pod_ready.go:92] pod "coredns-6d4b75cb6d-sz4ss" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:31.839863   29417 pod_ready.go:81] duration metric: took 4.583461ms waiting for pod "coredns-6d4b75cb6d-sz4ss" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:31.839869   29417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:33.853326   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:36.349932   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:38.351762   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:40.850107   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:42.852862   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:44.852267   29417 pod_ready.go:92] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:44.852279   29417 pod_ready.go:81] duration metric: took 13.012622945s waiting for pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:44.852286   29417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:44.857375   29417 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:44.857384   29417 pod_ready.go:81] duration metric: took 5.09412ms waiting for pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:44.857392   29417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:46.867189   29417 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:47.367656   29417 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:47.367668   29417 pod_ready.go:81] duration metric: took 2.510313483s waiting for pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.367678   29417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hhj48" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.371404   29417 pod_ready.go:92] pod "kube-proxy-hhj48" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:47.371412   29417 pod_ready.go:81] duration metric: took 3.727786ms waiting for pod "kube-proxy-hhj48" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.371420   29417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.375602   29417 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:47.375609   29417 pod_ready.go:81] duration metric: took 4.18498ms waiting for pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.375616   29417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:49.387717   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:51.885666   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:53.886054   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:56.384883   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:58.388307   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:00.887815   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:03.384496   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:05.387715   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:07.885515   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:09.885979   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:11.887797   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:14.388045   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:16.885529   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:18.885542   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:20.887317   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:23.386722   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:25.387475   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:27.885734   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:29.887218   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:32.385298   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:34.884952   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:36.886061   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:38.887042   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:41.384351   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:43.385883   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:45.885734   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:48.387488   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:50.888333   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:53.387254   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:55.883448   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	W0728 15:49:56.989217   28750 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
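
The commands kubeadm suggests above must run inside the minikube node, which under the docker driver is itself a container. A minimal diagnostic sketch, assuming the profile/container name from this run (it appears in the journal excerpts further down) and that the node is still reachable:

	# Run kubeadm's suggested checks inside the node container.
	minikube ssh -p old-k8s-version-20220728153807-12923 "sudo systemctl status kubelet"
	minikube ssh -p old-k8s-version-20220728153807-12923 "sudo journalctl -xeu kubelet | tail -n 50"
	# Equivalent via Docker, since the node is a container on the host:
	docker exec old-k8s-version-20220728153807-12923 docker ps -a | grep kube | grep -v pause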
	
	I0728 15:49:56.989249   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0728 15:49:57.413331   28750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:49:57.423140   28750 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:49:57.423194   28750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:49:57.430740   28750 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 15:49:57.430758   28750 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:49:58.137590   28750 out.go:204]   - Generating certificates and keys ...
	I0728 15:49:58.961107   28750 out.go:204]   - Booting up control plane ...
	I0728 15:49:57.884245   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:59.887006   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:02.387156   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:04.886992   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:07.387540   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:09.884586   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:11.887663   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:14.385348   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:16.386759   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:18.885993   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:21.386518   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:23.883548   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:25.883909   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:27.885135   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:29.886340   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:32.385238   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:34.885505   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:36.885858   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:39.385952   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:41.386558   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:43.884916   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:46.385722   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:48.885851   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:51.386119   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:53.885644   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:56.384861   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:58.385124   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:00.385353   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:02.884249   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:04.885728   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:07.385528   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:09.885039   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:11.885545   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:14.385339   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:16.885351   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:18.887968   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:21.382353   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:23.385169   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:25.885193   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:28.385639   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:30.885396   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:33.384905   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:35.884757   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:38.382709   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:40.385773   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:42.883260   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:45.384114   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:47.883140   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:49.883283   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:53.880636   28750 kubeadm.go:397] StartCluster complete in 7m58.936593061s
	I0728 15:51:53.880713   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:51:53.913209   28750 logs.go:274] 0 containers: []
	W0728 15:51:53.913222   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:51:53.913282   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:51:53.943409   28750 logs.go:274] 0 containers: []
	W0728 15:51:53.943421   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:51:53.943481   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:51:53.973451   28750 logs.go:274] 0 containers: []
	W0728 15:51:53.973463   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:51:53.973516   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:51:54.002910   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.002922   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:51:54.002981   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:51:54.035653   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.035665   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:51:54.035724   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:51:54.068593   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.068606   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:51:54.068668   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:51:54.098273   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.098285   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:51:54.098344   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:51:54.127232   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.127244   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:51:54.127252   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:51:54.127259   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:51:56.179496   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052257174s)
	I0728 15:51:56.179636   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:51:56.179644   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:51:56.220729   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:51:56.220744   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:51:56.232226   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:51:56.232240   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:51:56.289365   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:51:56.289376   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:51:56.289383   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0728 15:51:56.303663   28750 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[kubeadm init stdout/stderr elided; identical to the 15:49:56 dump above]
	W0728 15:51:56.303683   28750 out.go:239] * 
	W0728 15:51:56.303804   28750 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[kubeadm init stdout/stderr elided; identical to the 15:49:56 dump above]
	
	W0728 15:51:56.303823   28750 out.go:239] * 
	W0728 15:51:56.304345   28750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 15:51:56.399661   28750 out.go:177] 
	W0728 15:51:56.441944   28750 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[kubeadm init stdout/stderr elided; identical to the 15:49:56 dump above]
	
	W0728 15:51:56.442070   28750 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0728 15:51:56.442149   28750 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
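
As a concrete form of the suggestion above (a sketch, assuming the cluster is retried with the same profile, driver, and Kubernetes version as this run):

	minikube start -p old-k8s-version-20220728153807-12923 \
	  --driver=docker --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
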
	I0728 15:51:56.483570   28750 out.go:177] 
	I0728 15:51:51.885062   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:54.384742   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:56.405584   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 22:43:51 UTC, end at Thu 2022-07-28 22:51:57 UTC. --
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[130]: time="2022-07-28T22:43:53.942690810Z" level=info msg="Processing signal 'terminated'"
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[130]: time="2022-07-28T22:43:53.943578596Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[130]: time="2022-07-28T22:43:53.944089211Z" level=info msg="Daemon shutdown complete"
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[130]: time="2022-07-28T22:43:53.944161741Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 systemd[1]: docker.service: Succeeded.
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 systemd[1]: Stopped Docker Application Container Engine.
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 systemd[1]: Starting Docker Application Container Engine...
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.996727993Z" level=info msg="Starting up"
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.998837785Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.998874628Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.998894523Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.998901889Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.999936587Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.999985502Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.999998161Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.000004378Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.003470166Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.008187672Z" level=info msg="Loading containers: start."
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.081875363Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.110811702Z" level=info msg="Loading containers: done."
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.118880813Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.118939961Z" level=info msg="Daemon has completed initialization"
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 systemd[1]: Started Docker Application Container Engine.
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.140764725Z" level=info msg="API listen on [::]:2376"
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.143233308Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-07-28T22:52:00Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  22:52:00 up  1:12,  0 users,  load average: 0.43, 0.70, 0.89
	Linux old-k8s-version-20220728153807-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 22:43:51 UTC, end at Thu 2022-07-28 22:52:00 UTC. --
	Jul 28 22:51:59 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 28 22:51:59 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Jul 28 22:51:59 old-k8s-version-20220728153807-12923 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 28 22:51:59 old-k8s-version-20220728153807-12923 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 28 22:51:59 old-k8s-version-20220728153807-12923 kubelet[14426]: I0728 22:51:59.727145   14426 server.go:410] Version: v1.16.0
	Jul 28 22:51:59 old-k8s-version-20220728153807-12923 kubelet[14426]: I0728 22:51:59.727490   14426 plugins.go:100] No cloud provider specified.
	Jul 28 22:51:59 old-k8s-version-20220728153807-12923 kubelet[14426]: I0728 22:51:59.727544   14426 server.go:773] Client rotation is on, will bootstrap in background
	Jul 28 22:51:59 old-k8s-version-20220728153807-12923 kubelet[14426]: I0728 22:51:59.729072   14426 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 28 22:51:59 old-k8s-version-20220728153807-12923 kubelet[14426]: W0728 22:51:59.729643   14426 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 28 22:51:59 old-k8s-version-20220728153807-12923 kubelet[14426]: W0728 22:51:59.729729   14426 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 28 22:51:59 old-k8s-version-20220728153807-12923 kubelet[14426]: F0728 22:51:59.729792   14426 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 28 22:51:59 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 28 22:51:59 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 28 22:52:00 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Jul 28 22:52:00 old-k8s-version-20220728153807-12923 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 28 22:52:00 old-k8s-version-20220728153807-12923 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 28 22:52:00 old-k8s-version-20220728153807-12923 kubelet[14463]: I0728 22:52:00.506321   14463 server.go:410] Version: v1.16.0
	Jul 28 22:52:00 old-k8s-version-20220728153807-12923 kubelet[14463]: I0728 22:52:00.506718   14463 plugins.go:100] No cloud provider specified.
	Jul 28 22:52:00 old-k8s-version-20220728153807-12923 kubelet[14463]: I0728 22:52:00.506750   14463 server.go:773] Client rotation is on, will bootstrap in background
	Jul 28 22:52:00 old-k8s-version-20220728153807-12923 kubelet[14463]: I0728 22:52:00.509127   14463 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 28 22:52:00 old-k8s-version-20220728153807-12923 kubelet[14463]: W0728 22:52:00.510060   14463 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 28 22:52:00 old-k8s-version-20220728153807-12923 kubelet[14463]: W0728 22:52:00.510129   14463 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 28 22:52:00 old-k8s-version-20220728153807-12923 kubelet[14463]: F0728 22:52:00.510155   14463 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 28 22:52:00 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 28 22:52:00 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0728 15:52:00.257385   29704 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
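
The kubelet journal above pins the restart loop on "failed to run Kubelet: mountpoint for cpu not found": kubelet v1.16 predates cgroup v2 support, so it fails on a node whose host exposes only cgroup v2 and therefore has no per-controller cpu mount. One way to confirm what the node actually mounts (a sketch, reusing the node container name from this run):

	# cgroup v1 shows one mount per controller (cpu, cpuacct, memory, ...);
	# a single cgroup2 line here would explain the kubelet failure above.
	docker exec old-k8s-version-20220728153807-12923 grep cgroup /proc/mounts
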
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923: exit status 2 (419.399229ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220728153807-12923" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (491.07s)

TestStartStop/group/no-preload/serial/Pause (43.34s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220728153949-12923 --alsologtostderr -v=1
E0728 15:46:20.454371   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220728153949-12923 -n no-preload-20220728153949-12923
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220728153949-12923 -n no-preload-20220728153949-12923: exit status 2 (16.079988087s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220728153949-12923 -n no-preload-20220728153949-12923
E0728 15:46:52.847938   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220728153949-12923 -n no-preload-20220728153949-12923: exit status 2 (16.082754592s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220728153949-12923 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220728153949-12923 -n no-preload-20220728153949-12923
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220728153949-12923 -n no-preload-20220728153949-12923
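
Reduced to its shell equivalent, the check this test performs is roughly the sequence below (profile name taken from this run); after the pause step the test wants the apiserver reported as "Paused", while a stopped kubelet is acceptable:

	minikube pause -p no-preload-20220728153949-12923
	minikube status -p no-preload-20220728153949-12923 --format={{.APIServer}}   # test expects "Paused"
	minikube status -p no-preload-20220728153949-12923 --format={{.Kubelet}}     # "Stopped" is expected here
	minikube unpause -p no-preload-20220728153949-12923
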
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220728153949-12923
helpers_test.go:235: (dbg) docker inspect no-preload-20220728153949-12923:

-- stdout --
	[
	    {
	        "Id": "3f8adb766de93f397568039069c5b097806455f11713d0343f33bb401ef0b502",
	        "Created": "2022-07-28T22:39:51.528852294Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238589,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:41:04.982223707Z",
	            "FinishedAt": "2022-07-28T22:41:02.998662599Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/3f8adb766de93f397568039069c5b097806455f11713d0343f33bb401ef0b502/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f8adb766de93f397568039069c5b097806455f11713d0343f33bb401ef0b502/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f8adb766de93f397568039069c5b097806455f11713d0343f33bb401ef0b502/hosts",
	        "LogPath": "/var/lib/docker/containers/3f8adb766de93f397568039069c5b097806455f11713d0343f33bb401ef0b502/3f8adb766de93f397568039069c5b097806455f11713d0343f33bb401ef0b502-json.log",
	        "Name": "/no-preload-20220728153949-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220728153949-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220728153949-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/70815aca0066c90e611ae47b7e08eb4cad8229f27f8910d38dcfd7c0ca62b8fe-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/70815aca0066c90e611ae47b7e08eb4cad8229f27f8910d38dcfd7c0ca62b8fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/70815aca0066c90e611ae47b7e08eb4cad8229f27f8910d38dcfd7c0ca62b8fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/70815aca0066c90e611ae47b7e08eb4cad8229f27f8910d38dcfd7c0ca62b8fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220728153949-12923",
	                "Source": "/var/lib/docker/volumes/no-preload-20220728153949-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220728153949-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220728153949-12923",
	                "name.minikube.sigs.k8s.io": "no-preload-20220728153949-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2bad7230a6f13a83826a3bbc7a594991b8d8737e0708569e889a37a79c4c6eef",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58932"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58933"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58934"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58935"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58936"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2bad7230a6f1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220728153949-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3f8adb766de9",
	                        "no-preload-20220728153949-12923"
	                    ],
	                    "NetworkID": "790a06655bfe3414e1077459bcf3050a64dcee1b7d41236d506cad966a591457",
	                    "EndpointID": "8f65d24e088b3853630d1b1e0bb6dca05b0f3417d6812f78bf18635869ea87cd",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
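For reference, the Ports map in the dump above is the data behind the template lookups that appear later in this log (cli_runner lines of the form {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}). A standalone Go sketch of the same lookup, decoding only the fields it needs; the struct names and CLI wrapper are illustrative, not minikube code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// portBinding mirrors one entry under NetworkSettings.Ports in the
	// `docker inspect` JSON shown above.
	type portBinding struct {
		HostIP   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]portBinding `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		// e.g. go run . no-preload-20220728153949-12923
		out, err := exec.Command("docker", "inspect", os.Args[1]).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var entries []inspectEntry // docker inspect returns a JSON array
		if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
			fmt.Fprintln(os.Stderr, "bad inspect output:", err)
			os.Exit(1)
		}
		// Same lookup as the template above.
		if bs := entries[0].NetworkSettings.Ports["22/tcp"]; len(bs) > 0 {
			fmt.Println(bs[0].HostPort)
		}
	}

Run against the container above, this would print 58932, the host side of 22/tcp.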
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220728153949-12923 -n no-preload-20220728153949-12923
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220728153949-12923 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220728153949-12923 logs -n 25: (2.782564256s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p false-20220728152331-12923                     | false-20220728152331-12923              | jenkins | v1.26.0 | 28 Jul 22 15:36 PDT | 28 Jul 22 15:37 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=false                     |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| start   | -p bridge-20220728152330-12923                    | bridge-20220728152330-12923             | jenkins | v1.26.0 | 28 Jul 22 15:36 PDT | 28 Jul 22 15:36 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p bridge-20220728152330-12923                    | bridge-20220728152330-12923             | jenkins | v1.26.0 | 28 Jul 22 15:36 PDT | 28 Jul 22 15:36 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p bridge-20220728152330-12923                    | bridge-20220728152330-12923             | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:37 PDT |
	| start   | -p                                                | enable-default-cni-20220728152330-12923 | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:37 PDT |
	|         | enable-default-cni-20220728152330-12923           |                                         |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p false-20220728152331-12923                     | false-20220728152331-12923              | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:37 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p false-20220728152331-12923                     | false-20220728152331-12923              | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:37 PDT |
	| start   | -p                                                | kubenet-20220728152330-12923            | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:38 PDT |
	|         | kubenet-20220728152330-12923                      |                                         |         |         |                     |                     |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220728152330-12923 | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:37 PDT |
	|         | enable-default-cni-20220728152330-12923           |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220728152330-12923 | jenkins | v1.26.0 | 28 Jul 22 15:38 PDT | 28 Jul 22 15:38 PDT |
	|         | enable-default-cni-20220728152330-12923           |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:38 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220728152330-12923            | jenkins | v1.26.0 | 28 Jul 22 15:38 PDT | 28 Jul 22 15:38 PDT |
	|         | kubenet-20220728152330-12923                      |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220728152330-12923            | jenkins | v1.26.0 | 28 Jul 22 15:39 PDT | 28 Jul 22 15:39 PDT |
	|         | kubenet-20220728152330-12923                      |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:39 PDT | 28 Jul 22 15:40 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:40 PDT | 28 Jul 22 15:40 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:40 PDT | 28 Jul 22 15:41 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:41 PDT | 28 Jul 22 15:41 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:41 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:42 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT | 28 Jul 22 15:43 PDT |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT | 28 Jul 22 15:43 PDT |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 15:43:50
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
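The four header lines above spell out the klog prefix carried by every entry that follows. Purely as a reading aid, a Go regexp that splits that [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg layout into fields; the pattern is my own, not anything the harness uses:

	package main

	import (
		"fmt"
		"regexp"
	)

	// One capture group per field of the prefix documented in the log header.
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

	func main() {
		line := "I0728 15:43:50.132817   28750 out.go:296] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("sev=%s mmdd=%s time=%s pid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}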
	I0728 15:43:50.132817   28750 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:43:50.132989   28750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:43:50.132995   28750 out.go:309] Setting ErrFile to fd 2...
	I0728 15:43:50.133000   28750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:43:50.133108   28750 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:43:50.133582   28750 out.go:303] Setting JSON to false
	I0728 15:43:50.149553   28750 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9272,"bootTime":1659038958,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:43:50.149639   28750 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:43:50.171234   28750 out.go:177] * [old-k8s-version-20220728153807-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:43:50.193226   28750 notify.go:193] Checking for updates...
	I0728 15:43:50.215046   28750 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:43:50.237023   28750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:43:50.257931   28750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:43:50.279132   28750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:43:50.301171   28750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:43:50.323702   28750 config.go:178] Loaded profile config "old-k8s-version-20220728153807-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0728 15:43:50.345915   28750 out.go:177] * Kubernetes 1.24.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.3
	I0728 15:43:50.367017   28750 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:43:50.437569   28750 docker.go:137] docker version: linux-20.10.17
	I0728 15:43:50.437729   28750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:43:50.569692   28750 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:43:50.498204227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:43:50.591689   28750 out.go:177] * Using the docker driver based on existing profile
	I0728 15:43:50.613510   28750 start.go:284] selected driver: docker
	I0728 15:43:50.613538   28750 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220728153807-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 N
amespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: M
ultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:43:50.613718   28750 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:43:50.617013   28750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:43:50.747972   28750 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:43:50.676530285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:43:50.748120   28750 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 15:43:50.748138   28750 cni.go:95] Creating CNI manager for ""
	I0728 15:43:50.748148   28750 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:43:50.748159   28750 start_flags.go:310] config:
	{Name:old-k8s-version-20220728153807-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSD
omain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:43:50.791553   28750 out.go:177] * Starting control plane node old-k8s-version-20220728153807-12923 in cluster old-k8s-version-20220728153807-12923
	I0728 15:43:50.812795   28750 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:43:50.834772   28750 out.go:177] * Pulling base image ...
	I0728 15:43:50.876918   28750 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 15:43:50.876976   28750 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:43:50.877001   28750 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0728 15:43:50.877022   28750 cache.go:57] Caching tarball of preloaded images
	I0728 15:43:50.877208   28750 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:43:50.877230   28750 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0728 15:43:50.878252   28750 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/config.json ...
	I0728 15:43:50.941312   28750 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:43:50.941328   28750 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:43:50.941340   28750 cache.go:208] Successfully downloaded all kic artifacts
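Side note while the preload lines are in view: preload.go:148/174 reduce to a stat on the cached tarball, and a missing file at that path is the failure mode of the preload-exists checks. A sketch under the assumption that the cache root is $MINIKUBE_HOME (or ~/.minikube), which simplifies how the real code resolves it:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		home := os.Getenv("MINIKUBE_HOME")
		if home == "" {
			home = filepath.Join(os.Getenv("HOME"), ".minikube")
		}
		tarball := filepath.Join(home, "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4")
		// A plain stat: if the file is absent, a preload-exists check fails;
		// if present, the start path above skips the download.
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("preload missing:", err)
			os.Exit(1)
		}
		fmt.Println("preload present:", tarball)
	}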
	I0728 15:43:50.941397   28750 start.go:370] acquiring machines lock for old-k8s-version-20220728153807-12923: {Name:mke15a14ac0b96e8c97ba263723c52eb5c7e7def Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:43:50.941474   28750 start.go:374] acquired machines lock for "old-k8s-version-20220728153807-12923" in 57.265µs
	I0728 15:43:50.941495   28750 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:43:50.941503   28750 fix.go:55] fixHost starting: 
	I0728 15:43:50.941727   28750 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220728153807-12923 --format={{.State.Status}}
	I0728 15:43:51.004580   28750 fix.go:103] recreateIfNeeded on old-k8s-version-20220728153807-12923: state=Stopped err=<nil>
	W0728 15:43:51.004619   28750 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:43:51.026654   28750 out.go:177] * Restarting existing docker container for "old-k8s-version-20220728153807-12923" ...
	I0728 15:43:50.398263   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:43:52.897470   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
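The two interleaved 28466 entries come from a parallel test polling a pod's Ready condition via pod_ready.go. A minimal client-go sketch of such a check; this is a guess at the shape, not minikube's implementation, and the kubeconfig path and two-second interval are assumptions:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(c *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		c := kubernetes.NewForConfigOrDie(cfg)
		for {
			ok, err := podReady(c, "kube-system", "metrics-server-5c6f97fb75-2gxt5")
			fmt.Println("ready:", ok, "err:", err) // mirrors the pod_ready.go:102 lines
			if ok {
				return
			}
			time.Sleep(2 * time.Second)
		}
	}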
	I0728 15:43:51.069483   28750 cli_runner.go:164] Run: docker start old-k8s-version-20220728153807-12923
	I0728 15:43:51.432239   28750 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220728153807-12923 --format={{.State.Status}}
	I0728 15:43:51.497121   28750 kic.go:415] container "old-k8s-version-20220728153807-12923" state is running.
	I0728 15:43:51.497698   28750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220728153807-12923
	I0728 15:43:51.568555   28750 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/config.json ...
	I0728 15:43:51.568955   28750 machine.go:88] provisioning docker machine ...
	I0728 15:43:51.568976   28750 ubuntu.go:169] provisioning hostname "old-k8s-version-20220728153807-12923"
	I0728 15:43:51.569046   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:51.636172   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:51.636370   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:51.636385   28750 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220728153807-12923 && echo "old-k8s-version-20220728153807-12923" | sudo tee /etc/hostname
	I0728 15:43:51.762903   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220728153807-12923
	
	I0728 15:43:51.762993   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:51.828455   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:51.828606   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:51.828621   28750 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220728153807-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220728153807-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220728153807-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:43:51.949269   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:43:51.949293   28750 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:43:51.949317   28750 ubuntu.go:177] setting up certificates
	I0728 15:43:51.949328   28750 provision.go:83] configureAuth start
	I0728 15:43:51.949396   28750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220728153807-12923
	I0728 15:43:52.013262   28750 provision.go:138] copyHostCerts
	I0728 15:43:52.013379   28750 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:43:52.013389   28750 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:43:52.013487   28750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:43:52.013675   28750 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:43:52.013683   28750 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:43:52.013741   28750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:43:52.013881   28750 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:43:52.013887   28750 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:43:52.013945   28750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:43:52.014068   28750 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220728153807-12923 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220728153807-12923]
	I0728 15:43:52.162837   28750 provision.go:172] copyRemoteCerts
	I0728 15:43:52.162892   28750 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:43:52.162936   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:52.226854   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:52.314899   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:43:52.331775   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0728 15:43:52.349209   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 15:43:52.366683   28750 provision.go:86] duration metric: configureAuth took 417.345293ms
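configureAuth above produced a server certificate whose SANs (the san=[...] list at provision.go:112) mix IPs and DNS names; in Go's x509 package those land in IPAddresses and DNSNames respectively. A self-signed sketch with that SAN split; minikube signs with its CA key instead, and the key size, serial, and key-usage bits here are placeholders, while the lifetime mirrors the CertExpiration:26280h0m0s value in the profile config above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-20220728153807-12923"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN list from the provision.go line above: IPs become
			// IPAddresses, names become DNSNames.
			IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-20220728153807-12923"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}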
	I0728 15:43:52.366697   28750 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:43:52.366840   28750 config.go:178] Loaded profile config "old-k8s-version-20220728153807-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0728 15:43:52.366907   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:52.432300   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:52.432458   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:52.432469   28750 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:43:52.556064   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:43:52.556075   28750 ubuntu.go:71] root file system type: overlay
	I0728 15:43:52.556206   28750 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:43:52.556278   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:52.620853   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:52.621084   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:52.621129   28750 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:43:52.751843   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:43:52.751916   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:52.816883   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:52.817041   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:52.817055   28750 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:43:52.941836   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:43:52.941853   28750 machine.go:91] provisioned docker machine in 1.372912502s
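The diff-or-replace command above only swaps the new unit in and bounces Docker when the rendered file actually differs, which is what keeps repeated provisioning runs idempotent. The same idiom, spelled out as a standalone sketch:
	# Update-if-changed: replace the unit and restart the service only
	# when the freshly rendered file differs from the installed one.
	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	if ! sudo diff -u "$cur" "$new" >/dev/null 2>&1; then
	  sudo mv "$new" "$cur"
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	fi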
	I0728 15:43:52.941863   28750 start.go:307] post-start starting for "old-k8s-version-20220728153807-12923" (driver="docker")
	I0728 15:43:52.941870   28750 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:43:52.941934   28750 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:43:52.941995   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:53.006600   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:53.094280   28750 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:43:53.100080   28750 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:43:53.100098   28750 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:43:53.100105   28750 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:43:53.100109   28750 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:43:53.100119   28750 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:43:53.100242   28750 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:43:53.100374   28750 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:43:53.100517   28750 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:43:53.109632   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:43:53.126762   28750 start.go:310] post-start completed in 184.891915ms
	I0728 15:43:53.126836   28750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:43:53.126883   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:53.191616   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:53.276705   28750 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:43:53.281015   28750 fix.go:57] fixHost completed within 2.33954993s
	I0728 15:43:53.281029   28750 start.go:82] releasing machines lock for "old-k8s-version-20220728153807-12923", held for 2.339584988s
	I0728 15:43:53.281105   28750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220728153807-12923
	I0728 15:43:53.345999   28750 ssh_runner.go:195] Run: systemctl --version
	I0728 15:43:53.346002   28750 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:43:53.346069   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:53.346083   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:53.415502   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:53.416382   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:53.693282   28750 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:43:53.703210   28750 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:43:53.703267   28750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:43:53.715068   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:43:53.728140   28750 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:43:53.798778   28750 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:43:53.864441   28750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:43:53.929027   28750 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:43:54.130959   28750 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:43:54.167626   28750 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:43:54.246239   28750 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0728 15:43:54.246432   28750 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220728153807-12923 dig +short host.docker.internal
	I0728 15:43:54.362961   28750 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:43:54.363076   28750 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:43:54.367718   28750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
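The /etc/hosts rewrite above is a filter-then-append pass through a temp file, so re-running it replaces the entry instead of duplicating it. The same pattern, generalized (IP and name copied from the log line):
	# Idempotently pin a name to an IP in /etc/hosts: drop any old line
	# for the name, append the fresh mapping, copy back via sudo.
	ip=192.168.65.2; name=host.minikube.internal
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$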
	I0728 15:43:54.377807   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:54.476552   28750 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 15:43:54.476614   28750 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:43:54.506826   28750 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0728 15:43:54.506844   28750 docker.go:542] Images already preloaded, skipping extraction
	I0728 15:43:54.506923   28750 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:43:54.537701   28750 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0728 15:43:54.537724   28750 cache_images.go:84] Images are preloaded, skipping loading
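The two docker images listings above are how the preload check decides the tarball is already unpacked: extraction is skipped when every expected image:tag is present in the daemon. Roughly, as a sketch (expected list abbreviated here):
	# Skip preload extraction when all expected image:tags are loaded.
	expected="k8s.gcr.io/kube-apiserver:v1.16.0 k8s.gcr.io/etcd:3.3.15-0 k8s.gcr.io/pause:3.1"
	loaded=$(docker images --format '{{.Repository}}:{{.Tag}}')
	for img in $expected; do
	  grep -qxF "$img" <<<"$loaded" || { echo "missing $img"; exit 1; }
	done
	echo "Images already preloaded, skipping extraction"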
	I0728 15:43:54.537804   28750 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:43:54.609845   28750 cni.go:95] Creating CNI manager for ""
	I0728 15:43:54.609857   28750 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:43:54.609873   28750 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:43:54.609888   28750 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220728153807-12923 NodeName:old-k8s-version-20220728153807-12923 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:43:54.610015   28750 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220728153807-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220728153807-12923
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
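One way to sanity-check this generated config before it is applied is a kubeadm dry run, which renders the plan without touching the node; a sketch, assuming the same v1.16.0 binaries the log stages under /var/lib/minikube/binaries:
	# Validate the config and preview what kubeadm would do, changing nothing.
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run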
	I0728 15:43:54.610095   28750 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220728153807-12923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0728 15:43:54.610152   28750 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0728 15:43:54.618258   28750 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:43:54.618312   28750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:43:54.625914   28750 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0728 15:43:54.638312   28750 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:43:54.650390   28750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0728 15:43:54.662650   28750 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:43:54.666258   28750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:43:54.675591   28750 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923 for IP: 192.168.76.2
	I0728 15:43:54.675702   28750 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:43:54.675752   28750 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:43:54.675828   28750 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/client.key
	I0728 15:43:54.675888   28750 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.key.31bdca25
	I0728 15:43:54.675949   28750 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.key
	I0728 15:43:54.676161   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:43:54.676201   28750 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:43:54.676214   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:43:54.676249   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:43:54.676282   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:43:54.676311   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:43:54.676370   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:43:54.676906   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:43:54.693525   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0728 15:43:54.710007   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:43:54.727109   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 15:43:54.743956   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:43:54.760573   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:43:54.777182   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:43:54.793800   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:43:54.810385   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:43:54.826768   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:43:54.843784   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:43:54.860371   28750 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 15:43:54.873089   28750 ssh_runner.go:195] Run: openssl version
	I0728 15:43:54.878350   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:43:54.886133   28750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:43:54.889944   28750 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:43:54.889982   28750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:43:54.896504   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 15:43:54.903918   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:43:54.911623   28750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:43:54.915545   28750 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:43:54.915585   28750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:43:54.920977   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 15:43:54.928142   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:43:54.935893   28750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:43:54.939932   28750 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:43:54.939977   28750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:43:54.945076   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
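The openssl x509 -hash calls above compute the subject-hash name under which OpenSSL looks a CA up in /etc/ssl/certs; each ln -fs then creates the <hash>.0 symlink that makes the cert trusted. Reduced to its essentials:
	# Subject-hash link convention: OpenSSL trusts a CA in /etc/ssl/certs
	# when a <hash>.0 symlink points at it.
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"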
	I0728 15:43:54.952023   28750 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220728153807-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:43:54.952124   28750 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:43:54.982413   28750 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:43:54.990129   28750 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 15:43:54.990147   28750 kubeadm.go:626] restartCluster start
	I0728 15:43:54.990193   28750 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 15:43:54.997084   28750 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:54.997139   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:55.061683   28750 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220728153807-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:43:55.061868   28750 kubeconfig.go:127] "old-k8s-version-20220728153807-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 15:43:55.062205   28750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:43:55.063638   28750 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 15:43:55.071259   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.071320   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.079503   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:55.397737   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:43:57.399531   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:43:55.280076   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.280184   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.290411   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:55.481690   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.481806   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.492191   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:55.681640   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.681852   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.693077   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:55.881629   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.881805   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.893813   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.081620   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.081769   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.092929   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.281611   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.281821   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.292761   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.479869   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.480047   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.490772   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.679673   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.679846   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.690437   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.881685   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.881791   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.892358   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.079845   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.079982   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.090531   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.280055   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.280190   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.291095   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.480150   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.480244   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.492691   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.681615   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.681760   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.693150   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.881328   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.881469   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.892688   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:58.081706   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:58.081861   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:58.093332   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:58.093342   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:58.093387   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:58.101659   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:58.101671   28750 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0728 15:43:58.101676   28750 kubeadm.go:1092] stopping kube-system containers ...
	I0728 15:43:58.101734   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:43:58.130995   28750 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 15:43:58.141397   28750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:43:58.149507   28750 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5751 Jul 28 22:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5787 Jul 28 22:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Jul 28 22:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Jul 28 22:40 /etc/kubernetes/scheduler.conf
	
	I0728 15:43:58.149568   28750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 15:43:58.157415   28750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 15:43:58.165088   28750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 15:43:58.172300   28750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 15:43:58.179816   28750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:43:58.187386   28750 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 15:43:58.187397   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:58.238316   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:59.009658   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:59.230098   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:59.286178   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:59.342104   28750 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:43:59.342164   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
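The repeated pgrep runs that follow are a fixed-interval poll for the apiserver process to come up. As a standalone loop (interval matches the ~0.5s cadence in the log; the 60s timeout here is illustrative, not minikube's actual limit):
	# Poll roughly twice per second for the apiserver, giving up after 60s.
	for _ in $(seq 1 120); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && exit 0
	  sleep 0.5
	done
	echo "apiserver process never appeared" >&2; exit 1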
	I0728 15:43:59.852670   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:43:59.897387   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:02.399723   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:00.352781   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:00.850650   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:01.352768   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:01.850866   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:02.351446   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:02.850606   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:03.351150   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:03.851365   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:04.352535   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:04.852723   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:04.896817   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:06.897354   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:05.350807   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:05.852624   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:06.352589   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:06.851125   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:07.350565   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:07.852643   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:08.350474   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:08.850445   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:09.352534   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:09.850933   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:08.899065   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:11.398260   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:13.398433   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:10.352606   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:10.852619   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:11.350440   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:11.852134   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:12.352473   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:12.851013   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:13.352270   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:13.850370   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:14.350630   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:14.851959   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:15.896934   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:17.897174   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:15.352566   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:15.851616   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:16.350762   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:16.850420   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:17.350313   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:17.852472   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:18.350337   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:18.851370   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:19.350807   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:19.851563   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:19.897590   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:21.897825   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:20.351203   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:20.851730   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:21.350468   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:21.851009   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:22.350371   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:22.850766   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:23.351160   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:23.851721   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:24.351235   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:24.850785   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:24.396108   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:26.398371   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:25.351192   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:25.850201   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:26.350640   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:26.850236   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:27.350168   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:27.850786   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:28.351502   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:28.851514   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:29.350143   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:29.851249   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:28.898205   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:31.394334   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:33.396151   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:30.350104   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:30.850231   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:31.352251   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:31.850849   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:32.350184   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:32.850157   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:33.351061   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:33.850197   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:34.351704   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:34.850967   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:35.396237   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:37.398008   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:35.350170   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:35.852079   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:36.350361   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:36.849970   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:37.352028   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:37.852028   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:38.352103   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:38.850752   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:39.349925   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:39.850497   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:39.897415   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:42.396857   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:40.350260   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:40.852112   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:41.350628   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:41.850335   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:42.350937   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:42.850588   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:43.350213   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:43.851905   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:44.350537   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:44.851886   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:44.896733   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:47.395498   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:45.351362   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:45.850422   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:46.350013   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:46.851847   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:47.350287   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:47.851880   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:48.349946   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:48.850339   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:49.350494   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:49.851141   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:49.397565   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:51.896910   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:50.350171   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:50.849782   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:51.350363   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:51.850156   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:52.351696   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:52.851835   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:53.349667   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:53.851882   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:54.351848   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:54.851044   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:54.397632   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:56.397780   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:58.398020   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:55.351691   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:55.851300   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:56.351196   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:56.851744   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:57.351804   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:57.850801   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:58.350639   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:58.851158   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
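	(Annotation: the ~500 ms pgrep loop above is minikube's apiserver liveness probe. A minimal hand-runnable version of the same check, executed inside the node (e.g. via `minikube ssh`), is sketched below; the loop framing is illustrative, not minikube's code, but the pgrep invocation is taken verbatim from the log.)
	    # -x: pattern must match exactly; -n: newest matching process; -f: match full command line.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	      sleep 0.5   # same ~500ms cadence as the log lines above
	    done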
	I0728 15:44:59.349783   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:44:59.382837   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.382851   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:44:59.382917   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:44:59.412464   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.412476   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:44:59.412541   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:44:59.442864   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.442878   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:44:59.442939   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:44:59.474280   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.474292   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:44:59.474350   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:44:59.504175   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.504187   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:44:59.504249   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:44:59.533670   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.533684   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:44:59.533737   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:44:59.565362   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.565374   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:44:59.565431   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:44:59.595139   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.595151   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:44:59.595159   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:44:59.595166   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:44:59.609196   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:44:59.609210   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:00.897095   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:02.897516   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:01.663458   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054270661s)
	I0728 15:45:01.663570   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:01.663577   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:01.703232   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:01.703247   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:01.715560   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:01.715573   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:01.767426   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
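	(Annotation: every retry cycle in this log performs the same container inventory followed by the same log sweep. A condensed hand-runnable version, assuming the Docker runtime and the exact paths shown in the log, is:)
	    # List any (possibly exited) control-plane containers by kubelet naming convention:
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kubernetes-dashboard storage-provisioner kube-controller-manager; do
	      docker ps -a --filter=name=k8s_$c --format '{{.ID}} {{.Status}}'
	    done
	    # The log sweep minikube falls back to when no containers are found:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig   # fails here: apiserver on :8443 is down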
	I0728 15:45:04.268324   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:04.349908   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:04.380997   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.381016   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:04.381076   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:04.411821   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.411834   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:04.411892   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:04.441534   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.441546   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:04.441601   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:04.472385   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.472397   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:04.472486   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:04.501753   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.501766   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:04.501827   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:04.536867   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.536880   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:04.536936   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:04.567861   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.567875   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:04.567930   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:04.597628   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.597640   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:04.597647   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:04.597657   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:05.395907   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:07.896645   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:06.654101   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056467581s)
	I0728 15:45:06.654210   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:06.654217   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:06.694756   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:06.694770   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:06.707257   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:06.707270   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:06.761874   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:06.761884   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:06.761891   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:09.276908   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:09.351563   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:09.386142   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.386155   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:09.386219   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:09.418466   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.418478   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:09.418538   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:09.448308   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.448320   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:09.448380   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:09.479593   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.479607   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:09.479679   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:09.508030   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.508043   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:09.508099   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:09.537779   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.537792   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:09.537846   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:09.566993   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.567006   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:09.567065   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:09.596654   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.596672   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:09.596682   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:09.596738   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:09.649892   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:09.649903   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:09.649919   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:09.664184   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:09.664200   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:09.898006   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:12.395262   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:11.716355   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052177269s)
	I0728 15:45:11.716505   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:11.716513   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:11.755880   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:11.755897   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:14.268633   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:14.349684   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:14.380092   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.380128   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:14.380189   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:14.410724   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.410736   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:14.410797   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:14.439371   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.439384   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:14.439439   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:14.469393   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.469406   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:14.469468   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:14.498223   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.498241   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:14.498310   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:14.527916   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.527928   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:14.527993   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:14.557360   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.557378   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:14.557437   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:14.586000   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.586014   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:14.586021   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:14.586027   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:14.625146   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:14.625158   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:14.637616   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:14.637630   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:14.690046   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:14.690066   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:14.690072   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:14.703985   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:14.703997   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:14.894377   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:17.394590   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:16.758147   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054173236s)
	I0728 15:45:19.260560   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:19.351400   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:19.382794   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.382806   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:19.382867   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:19.412998   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.413010   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:19.413076   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:19.442557   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.442571   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:19.442639   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:19.475183   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.475196   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:19.475261   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:19.505391   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.505404   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:19.505469   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:19.536777   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.536793   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:19.536848   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:19.570024   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.570037   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:19.570094   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:19.599292   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.599304   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:19.599311   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:19.599318   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:19.639705   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:19.639722   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:19.651159   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:19.651172   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:19.703460   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:19.703471   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:19.703478   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:19.717843   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:19.717856   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:19.394793   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:21.397697   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:21.769960   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05212591s)
	I0728 15:45:24.270526   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:24.351276   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:24.382811   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.382824   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:24.382886   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:24.414444   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.414457   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:24.414517   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:24.443832   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.443845   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:24.443908   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:24.474162   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.474175   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:24.474237   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:24.503347   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.503359   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:24.503421   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:24.531984   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.531996   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:24.532053   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:24.562043   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.562057   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:24.562112   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:24.591508   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.591520   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:24.591528   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:24.591535   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:24.631583   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:24.631595   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:24.643477   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:24.643492   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:24.697351   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:24.697362   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:24.697368   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:24.711821   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:24.711834   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:23.897383   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:25.897565   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:28.395376   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:26.770905   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059093267s)
	I0728 15:45:29.271547   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:29.349224   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:29.380066   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.380080   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:29.380151   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:29.409249   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.409261   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:29.409319   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:29.437151   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.437169   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:29.437240   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:29.467091   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.467103   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:29.467161   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:29.497532   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.497549   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:29.497615   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:29.526724   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.526737   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:29.526795   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:29.555433   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.555447   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:29.555505   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:29.584958   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.584972   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:29.584981   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:29.584988   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:29.624109   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:29.624122   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:29.635456   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:29.635476   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:29.687908   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:29.687924   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:29.687931   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:29.702012   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:29.702024   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:30.396084   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:32.893828   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:31.757527   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055526095s)
	I0728 15:45:34.258583   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:34.349159   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:34.379696   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.379712   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:34.379777   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:34.409678   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.409691   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:34.409750   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:34.448652   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.448666   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:34.448783   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:34.481247   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.481260   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:34.481331   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:34.515888   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.515900   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:34.515957   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:34.546279   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.546293   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:34.546361   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:34.578942   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.578959   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:34.579027   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:34.610475   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.610486   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:34.610493   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:34.610500   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:34.657901   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:34.657920   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:34.671775   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:34.671798   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:34.725845   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:34.725862   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:34.725869   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:34.743490   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:34.743511   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:33.889657   28466 pod_ready.go:81] duration metric: took 4m0.005353913s waiting for pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace to be "Ready" ...
	E0728 15:45:33.889681   28466 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0728 15:45:33.889764   28466 pod_ready.go:38] duration metric: took 4m15.55855467s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:45:33.889804   28466 kubeadm.go:630] restartCluster took 4m24.926328202s
	W0728 15:45:33.889929   28466 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
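	(Annotation: the pod_ready loop that just expired polled pod status roughly every 2.5 s for 4 minutes. A rough one-shot equivalent with stock kubectl, using the pod name from this log, would be:)
	    kubectl -n kube-system wait --for=condition=Ready \
	      pod/metrics-server-5c6f97fb75-2gxt5 --timeout=4m
	    # or inspect why the pod never became Ready:
	    kubectl -n kube-system describe pod metrics-server-5c6f97fb75-2gxt5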
	I0728 15:45:33.889957   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0728 15:45:36.339867   28466 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.449910034s)
	I0728 15:45:36.339927   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:45:36.349324   28466 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:45:36.356429   28466 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:45:36.356476   28466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:45:36.363628   28466 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 15:45:36.363652   28466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
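	(Annotation: at this point minikube has given up on restarting the old cluster and rebuilds it from scratch. The recovery sequence it just executed reduces to the commands below, with binary path, CRI socket, and config paths taken verbatim from the log lines above:)
	    B=/var/lib/minikube/binaries/v1.24.3
	    # Tear down whatever kubeadm state remains:
	    sudo env PATH="$B:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
	    # Install the freshly generated config, then bootstrap a new control plane:
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="$B:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=SystemVerification,...  # full list as on the log line above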
	I0728 15:45:36.642620   28466 out.go:204]   - Generating certificates and keys ...
	I0728 15:45:37.767400   28466 out.go:204]   - Booting up control plane ...
	I0728 15:45:36.796303   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05281482s)
	I0728 15:45:39.297144   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:39.349206   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:39.384998   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.385012   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:39.385074   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:39.415143   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.415155   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:39.415212   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:39.455721   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.455742   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:39.455813   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:39.486528   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.486545   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:39.486610   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:39.514977   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.514990   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:39.515048   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:39.550354   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.550367   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:39.550435   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:39.583427   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.583445   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:39.583507   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:39.613948   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.613963   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:39.613970   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:39.613976   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:41.665141   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051188026s)
	I0728 15:45:41.665254   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:41.665262   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:41.703690   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:41.703705   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:41.715446   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:41.715461   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:41.769895   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:41.769906   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:41.769913   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:44.283371   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:44.350279   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:44.381088   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.381106   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:44.381177   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:44.410783   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.410796   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:44.410859   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:44.439499   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.439511   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:44.439565   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:44.468617   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.468631   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:44.468687   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:44.502836   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.502850   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:44.502906   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:44.531631   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.531645   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:44.531710   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:44.562770   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.562782   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:44.562843   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:44.590589   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.590605   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:44.590612   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:44.590619   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:44.630687   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:44.630701   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:44.643944   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:44.643958   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:44.697537   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:44.697552   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:44.697560   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:44.711695   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:44.711708   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:44.826788   28466 out.go:204]   - Configuring RBAC rules ...
	I0728 15:45:45.232252   28466 cni.go:95] Creating CNI manager for ""
	I0728 15:45:45.232266   28466 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:45:45.232286   28466 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 15:45:45.232379   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:45.232384   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551 minikube.k8s.io/name=no-preload-20220728153949-12923 minikube.k8s.io/updated_at=2022_07_28T15_45_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:45.244124   28466 ops.go:34] apiserver oom_adj: -16
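	(Annotation: the oom_adj read above reports -16, meaning the kernel OOM killer will strongly avoid the new apiserver process; on the modern oom_score_adj scale that corresponds to about -998, the value the kubelet assigns to guaranteed-QoS pods. To verify both knobs by hand:)
	    # Read legacy and current OOM knobs for the newest apiserver process:
	    p=$(pgrep -n kube-apiserver)
	    cat /proc/$p/oom_adj        # -16 here (legacy scale, as logged above)
	    cat /proc/$p/oom_score_adj  # modern scale; roughly -998 for this oom_adj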
	I0728 15:45:45.358591   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:45.913296   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:46.413506   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:46.912821   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:47.413547   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:47.913026   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:48.413424   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:46.766195   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054508784s)
	I0728 15:45:49.266834   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:49.350965   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:49.381946   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.381958   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:49.382017   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:49.411642   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.411655   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:49.411712   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:49.443920   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.443931   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:49.443989   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:49.489604   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.489617   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:49.489677   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:49.521878   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.521891   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:49.521946   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:49.550505   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.550518   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:49.550579   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:49.578158   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.578171   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:49.578228   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:49.606569   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.606582   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:49.606589   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:49.606596   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:49.647420   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:49.647434   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:49.659418   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:49.659430   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:49.712728   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:49.712739   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:49.712748   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:49.726477   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:49.726490   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:48.914841   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:49.412700   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:49.914899   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:50.413335   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:50.912856   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:51.412923   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:51.912900   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:52.413219   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:52.912789   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:53.412866   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:51.782399   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055932042s)
	I0728 15:45:54.282734   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:54.350836   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:54.380911   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.380923   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:54.380988   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:54.409653   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.409665   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:54.409728   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:54.437934   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.437948   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:54.438009   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:54.469669   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.469682   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:54.469762   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:54.497866   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.497878   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:54.497939   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:54.527154   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.527166   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:54.527225   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:54.555859   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.555872   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:54.555929   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:54.585491   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.585508   28750 logs.go:276] No container was found matching "kube-controller-manager"
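[editor's note] Each diagnostic round above (pid 28750) asks Docker for the containers of each control-plane component by name filter and logs a warning when the list comes back empty; that is what the repeated `0 containers: []` / `No container was found matching` pairs record. A sketch of the same scan, assuming only that `docker` is on the PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // components mirrors the names scanned in the log; the k8s_ prefix is the
    // kubelet's Docker container naming convention.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kubernetes-dashboard", "storage-provisioner",
    	"kube-controller-manager",
    }

    func main() {
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
    		if err != nil {
    			fmt.Printf("docker ps failed for %s: %v\n", c, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		if len(ids) == 0 {
    			fmt.Printf("W No container was found matching %q\n", c)
    		}
    	}
    }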
	I0728 15:45:54.585515   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:54.585527   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:54.638036   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:54.638054   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:54.638060   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:54.651690   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:54.651703   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
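[editor's note] The `container status` gatherer runs a shell fallback, preferring `crictl` when installed and otherwise falling back to `docker ps -a` (the `which crictl || echo crictl` idiom above). An equivalent fallback written directly in Go, without the shell; a sketch only:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus tries crictl first and falls back to docker, the same
    // preference the `which crictl || echo crictl` shell idiom in the log encodes.
    func containerStatus() ([]byte, error) {
    	if path, err := exec.LookPath("crictl"); err == nil {
    		return exec.Command("sudo", path, "ps", "-a").Output()
    	}
    	return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    		return
    	}
    	fmt.Print(string(out))
    }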
	I0728 15:45:53.913846   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:54.412622   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:54.914082   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:55.412652   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:55.914738   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:56.413346   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:56.912748   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:57.413739   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:57.914229   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:57.989686   28466 kubeadm.go:1045] duration metric: took 12.757584516s to wait for elevateKubeSystemPrivileges.
	I0728 15:45:57.989703   28466 kubeadm.go:397] StartCluster complete in 4m49.064424466s
	I0728 15:45:57.989718   28466 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:45:57.989792   28466 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:45:57.990324   28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:45:58.526817   28466 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220728153949-12923" rescaled to 1
	I0728 15:45:58.526854   28466 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 15:45:58.526861   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 15:45:58.526878   28466 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0728 15:45:58.527016   28466 config.go:178] Loaded profile config "no-preload-20220728153949-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:45:58.548502   28466 out.go:177] * Verifying Kubernetes components...
	I0728 15:45:58.548569   28466 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220728153949-12923"
	I0728 15:45:58.589985   28466 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220728153949-12923"
	I0728 15:45:58.548566   28466 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220728153949-12923"
	I0728 15:45:58.590013   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:45:58.590027   28466 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220728153949-12923"
	W0728 15:45:58.590040   28466 addons.go:162] addon storage-provisioner should already be in state true
	I0728 15:45:58.548579   28466 addons.go:65] Setting dashboard=true in profile "no-preload-20220728153949-12923"
	I0728 15:45:58.590071   28466 addons.go:153] Setting addon dashboard=true in "no-preload-20220728153949-12923"
	W0728 15:45:58.590082   28466 addons.go:162] addon dashboard should already be in state true
	I0728 15:45:58.590082   28466 host.go:66] Checking if "no-preload-20220728153949-12923" exists ...
	I0728 15:45:58.548577   28466 addons.go:65] Setting metrics-server=true in profile "no-preload-20220728153949-12923"
	I0728 15:45:58.590115   28466 addons.go:153] Setting addon metrics-server=true in "no-preload-20220728153949-12923"
	I0728 15:45:58.590118   28466 host.go:66] Checking if "no-preload-20220728153949-12923" exists ...
	I0728 15:45:58.580026   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W0728 15:45:58.590126   28466 addons.go:162] addon metrics-server should already be in state true
	I0728 15:45:58.590185   28466 host.go:66] Checking if "no-preload-20220728153949-12923" exists ...
	I0728 15:45:58.590401   28466 cli_runner.go:164] Run: docker container inspect no-preload-20220728153949-12923 --format={{.State.Status}}
	I0728 15:45:58.590546   28466 cli_runner.go:164] Run: docker container inspect no-preload-20220728153949-12923 --format={{.State.Status}}
	I0728 15:45:58.591079   28466 cli_runner.go:164] Run: docker container inspect no-preload-20220728153949-12923 --format={{.State.Status}}
	I0728 15:45:58.594850   28466 cli_runner.go:164] Run: docker container inspect no-preload-20220728153949-12923 --format={{.State.Status}}
	I0728 15:45:58.695649   28466 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0728 15:45:58.754046   28466 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:45:58.790960   28466 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0728 15:45:58.812176   28466 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0728 15:45:58.818778   28466 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220728153949-12923"
	I0728 15:45:56.704258   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052577476s)
	I0728 15:45:56.704368   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:56.704375   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:56.743901   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:56.743916   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:59.255613   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:59.348618   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:59.378663   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.378676   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:59.378733   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:59.407038   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.407050   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:59.407106   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:59.450158   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.450182   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:59.450263   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:59.481564   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.481576   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:59.481635   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:59.509158   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.509171   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:59.509229   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:59.547552   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.547570   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:59.547643   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:59.578542   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.578554   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:59.578613   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:59.606863   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.606876   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:59.606883   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:59.606892   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:59.649194   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:59.649222   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:59.663803   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:59.663819   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:59.714772   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
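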
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:59.714789   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:59.714798   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:59.734190   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:59.734229   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:58.849081   28466 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0728 15:45:58.907117   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	W0728 15:45:58.869947   28466 addons.go:162] addon default-storageclass should already be in state true
	I0728 15:45:58.907160   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0728 15:45:58.907170   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0728 15:45:58.870036   28466 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:45:58.907185   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 15:45:58.907197   28466 host.go:66] Checking if "no-preload-20220728153949-12923" exists ...
	I0728 15:45:58.907197   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220728153949-12923
	I0728 15:45:58.907247   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220728153949-12923
	I0728 15:45:58.907251   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220728153949-12923
	I0728 15:45:58.912038   28466 cli_runner.go:164] Run: docker container inspect no-preload-20220728153949-12923 --format={{.State.Status}}
	I0728 15:45:58.994544   28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58932 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/no-preload-20220728153949-12923/id_rsa Username:docker}
	I0728 15:45:58.997290   28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58932 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/no-preload-20220728153949-12923/id_rsa Username:docker}
	I0728 15:45:58.997437   28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58932 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/no-preload-20220728153949-12923/id_rsa Username:docker}
	I0728 15:45:58.999592   28466 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 15:45:58.999603   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 15:45:58.999748   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220728153949-12923
	I0728 15:45:59.080553   28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58932 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/no-preload-20220728153949-12923/id_rsa Username:docker}
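[editor's note] The `cli_runner` lines above read the host port that Docker mapped to the node's SSH port (22/tcp) with a Go template, and the `sshutil` lines then dial 127.0.0.1 on that port (58932 in this run). A sketch of the port lookup; the container name is copied from the log and `hostPort` is a hypothetical helper:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPort returns the host port Docker mapped to the container's given
    // port, using the same Go template seen in the cli_runner lines above.
    func hostPort(container, port string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	p, err := hostPort("no-preload-20220728153949-12923", "22/tcp")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println("ssh to 127.0.0.1:" + p) // e.g. 58932 in this run
    }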
	I0728 15:45:59.138372   28466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:45:59.143860   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0728 15:45:59.143874   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0728 15:45:59.230878   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0728 15:45:59.230896   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0728 15:45:59.247435   28466 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0728 15:45:59.247449   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0728 15:45:59.250912   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0728 15:45:59.250929   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0728 15:45:59.329223   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0728 15:45:59.329241   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0728 15:45:59.331623   28466 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0728 15:45:59.331638   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0728 15:45:59.347749   28466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 15:45:59.429934   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0728 15:45:59.429948   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0728 15:45:59.431847   28466 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 15:45:59.431861   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0728 15:45:59.450115   28466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 15:45:59.459293   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0728 15:45:59.459312   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0728 15:45:59.545816   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0728 15:45:59.545840   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0728 15:45:59.655127   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0728 15:45:59.655145   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0728 15:45:59.740993   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 15:45:59.741009   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0728 15:45:59.822122   28466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 15:45:59.832268   28466 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.242127417s)
	I0728 15:45:59.832287   28466 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.242274949s)
	I0728 15:45:59.832293   28466 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
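[editor's note] The long sed pipeline that just completed splices a CoreDNS `hosts` block in front of the `forward . /etc/resolv.conf` line of the Corefile, so `host.minikube.internal` resolves to 192.168.65.2 inside the cluster. A Go sketch of the same string surgery; the sample Corefile is illustrative:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a CoreDNS hosts block ahead of the forward
    // plugin line, mirroring what the sed pipeline in the log does to the
    // coredns ConfigMap. Illustrative only.
    func injectHostRecord(corefile, hostIP string) string {
    	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
    			b.WriteString(hosts)
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.65.2"))
    }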
	I0728 15:45:59.832396   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220728153949-12923
	I0728 15:45:59.900849   28466 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220728153949-12923" to be "Ready" ...
	I0728 15:45:59.917729   28466 node_ready.go:49] node "no-preload-20220728153949-12923" has status "Ready":"True"
	I0728 15:45:59.917739   28466 node_ready.go:38] duration metric: took 16.86682ms waiting for node "no-preload-20220728153949-12923" to be "Ready" ...
	I0728 15:45:59.917744   28466 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:45:59.927582   28466 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9vjb2" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:00.224380   28466 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220728153949-12923"
	I0728 15:46:00.442341   28466 pod_ready.go:92] pod "coredns-6d4b75cb6d-9vjb2" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:00.442370   28466 pod_ready.go:81] duration metric: took 514.770942ms waiting for pod "coredns-6d4b75cb6d-9vjb2" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:00.442381   28466 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-kv2dp" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:00.470970   28466 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0728 15:46:00.507076   28466 addons.go:414] enableAddons completed in 1.980228842s
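[editor's note] Each addon enabled above follows the same two-step pattern: an in-memory manifest is copied to `/etc/kubernetes/addons/` (the `scp memory --> ...` lines) and then applied with the cluster's own kubectl binary. A loose local sketch, with a hypothetical `applyAddon` helper and a made-up StorageClass manifest standing in for the real ones:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    )

    // applyAddon writes an in-memory manifest to disk and applies it, loosely
    // mirroring the "scp memory --> /etc/kubernetes/addons/..." plus
    // "kubectl apply -f" pairs in the log. Paths and names are illustrative.
    func applyAddon(name string, manifest []byte) error {
    	path := filepath.Join(os.TempDir(), name)
    	if err := os.WriteFile(path, manifest, 0o644); err != nil {
    		return err
    	}
    	out, err := exec.Command("kubectl", "apply", "-f", path).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply %s: %v\n%s", name, err, out)
    	}
    	return nil
    }

    func main() {
    	sc := []byte("apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: standard\nprovisioner: k8s.io/minikube-hostpath\n")
    	if err := applyAddon("storageclass.yaml", sc); err != nil {
    		fmt.Println(err)
    	}
    }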
	I0728 15:46:02.457821   28466 pod_ready.go:102] pod "coredns-6d4b75cb6d-kv2dp" in "kube-system" namespace has status "Ready":"False"
	I0728 15:46:01.794612   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060404071s)
	I0728 15:46:04.294986   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:04.348576   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:04.378484   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.378498   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:04.378561   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:04.406624   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.406636   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:04.406692   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:04.445898   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.445930   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:04.445992   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:04.489972   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.489989   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:04.490075   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:04.530482   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.530498   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:04.530561   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:04.563512   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.563527   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:04.563586   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:04.597809   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.597825   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:04.597888   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:04.635527   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.635544   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:04.635553   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:04.635560   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:04.648400   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:04.648417   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:04.714199   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:04.714221   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:04.714234   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:04.731052   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:04.731068   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:04.459041   28466 pod_ready.go:92] pod "coredns-6d4b75cb6d-kv2dp" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:04.459057   28466 pod_ready.go:81] duration metric: took 4.01673536s waiting for pod "coredns-6d4b75cb6d-kv2dp" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.459065   28466 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.464287   28466 pod_ready.go:92] pod "etcd-no-preload-20220728153949-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:04.464297   28466 pod_ready.go:81] duration metric: took 5.228547ms waiting for pod "etcd-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.464305   28466 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.469967   28466 pod_ready.go:92] pod "kube-apiserver-no-preload-20220728153949-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:04.469978   28466 pod_ready.go:81] duration metric: took 5.669404ms waiting for pod "kube-apiserver-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.469985   28466 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.475157   28466 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220728153949-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:04.475166   28466 pod_ready.go:81] duration metric: took 5.176625ms waiting for pod "kube-controller-manager-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.475176   28466 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wnfz5" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.480189   28466 pod_ready.go:92] pod "kube-proxy-wnfz5" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:04.480200   28466 pod_ready.go:81] duration metric: took 5.019746ms waiting for pod "kube-proxy-wnfz5" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.480209   28466 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.854654   28466 pod_ready.go:92] pod "kube-scheduler-no-preload-20220728153949-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:04.854669   28466 pod_ready.go:81] duration metric: took 374.456191ms waiting for pod "kube-scheduler-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.854674   28466 pod_ready.go:38] duration metric: took 4.937005546s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
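[editor's note] The `pod_ready` phase above polls each system-critical pod until its Ready condition reports True, within a shared 6m0s budget. A kubectl-based sketch of that wait (minikube talks to the API server directly via client-go; the pod names below are copied from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // podReady reports whether the pod's Ready condition is True, via kubectl.
    func podReady(ns, name string) bool {
    	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
    	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
    	for _, p := range []string{"coredns-6d4b75cb6d-9vjb2", "etcd-no-preload-20220728153949-12923"} {
    		for !podReady("kube-system", p) {
    			if time.Now().After(deadline) {
    				fmt.Printf("timed out waiting for %s\n", p)
    				break
    			}
    			time.Sleep(500 * time.Millisecond)
    		}
    	}
    }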
	I0728 15:46:04.854690   28466 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:46:04.854739   28466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:04.914822   28466 api_server.go:71] duration metric: took 6.388056042s to wait for apiserver process to appear ...
	I0728 15:46:04.914839   28466 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:46:04.914846   28466 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58936/healthz ...
	I0728 15:46:04.925933   28466 api_server.go:266] https://127.0.0.1:58936/healthz returned 200:
	ok
	I0728 15:46:04.927291   28466 api_server.go:140] control plane version: v1.24.3
	I0728 15:46:04.927300   28466 api_server.go:130] duration metric: took 12.457178ms to wait for apiserver health ...
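[editor's note] The healthz wait above issues GETs against `https://127.0.0.1:58936/healthz` until the API server answers 200 `ok`. A minimal sketch; `InsecureSkipVerify` stands in for minikube's real client-certificate setup:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200,
    // as in the "Checking apiserver healthz at ..." lines above.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // "returned 200: ok"
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://127.0.0.1:58936/healthz", 30*time.Second))
    }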
	I0728 15:46:04.927305   28466 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:46:05.059616   28466 system_pods.go:59] 9 kube-system pods found
	I0728 15:46:05.059631   28466 system_pods.go:61] "coredns-6d4b75cb6d-9vjb2" [e7ff5e7b-3b27-4312-a39d-a807cc162175] Running
	I0728 15:46:05.059635   28466 system_pods.go:61] "coredns-6d4b75cb6d-kv2dp" [f57a996a-e1a8-4e75-a619-5671e5398a85] Running
	I0728 15:46:05.059639   28466 system_pods.go:61] "etcd-no-preload-20220728153949-12923" [1e755c7d-94af-466b-b973-367fe022b2ec] Running
	I0728 15:46:05.059644   28466 system_pods.go:61] "kube-apiserver-no-preload-20220728153949-12923" [f45ada08-8ab3-4173-af85-c6c94912703a] Running
	I0728 15:46:05.059652   28466 system_pods.go:61] "kube-controller-manager-no-preload-20220728153949-12923" [14416218-b978-469c-96bb-e5ef5165ea3e] Running
	I0728 15:46:05.059658   28466 system_pods.go:61] "kube-proxy-wnfz5" [4bd8afa6-e125-44b3-b396-bcec5dc95ab3] Running
	I0728 15:46:05.059664   28466 system_pods.go:61] "kube-scheduler-no-preload-20220728153949-12923" [36649128-eaef-4cb3-93e1-a52797fdea9c] Running
	I0728 15:46:05.059670   28466 system_pods.go:61] "metrics-server-5c6f97fb75-gkqvh" [44470584-40a9-4bb2-8cff-f06ef3e04c5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:46:05.059676   28466 system_pods.go:61] "storage-provisioner" [73a9ba2c-55f8-4a8f-97d0-40aee2a41f7c] Running
	I0728 15:46:05.059680   28466 system_pods.go:74] duration metric: took 132.374042ms to wait for pod list to return data ...
	I0728 15:46:05.059685   28466 default_sa.go:34] waiting for default service account to be created ...
	I0728 15:46:05.254711   28466 default_sa.go:45] found service account: "default"
	I0728 15:46:05.254723   28466 default_sa.go:55] duration metric: took 195.036763ms for default service account to be created ...
	I0728 15:46:05.254728   28466 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 15:46:05.457897   28466 system_pods.go:86] 9 kube-system pods found
	I0728 15:46:05.457911   28466 system_pods.go:89] "coredns-6d4b75cb6d-9vjb2" [e7ff5e7b-3b27-4312-a39d-a807cc162175] Running
	I0728 15:46:05.457917   28466 system_pods.go:89] "coredns-6d4b75cb6d-kv2dp" [f57a996a-e1a8-4e75-a619-5671e5398a85] Running
	I0728 15:46:05.457923   28466 system_pods.go:89] "etcd-no-preload-20220728153949-12923" [1e755c7d-94af-466b-b973-367fe022b2ec] Running
	I0728 15:46:05.457926   28466 system_pods.go:89] "kube-apiserver-no-preload-20220728153949-12923" [f45ada08-8ab3-4173-af85-c6c94912703a] Running
	I0728 15:46:05.457930   28466 system_pods.go:89] "kube-controller-manager-no-preload-20220728153949-12923" [14416218-b978-469c-96bb-e5ef5165ea3e] Running
	I0728 15:46:05.457934   28466 system_pods.go:89] "kube-proxy-wnfz5" [4bd8afa6-e125-44b3-b396-bcec5dc95ab3] Running
	I0728 15:46:05.457940   28466 system_pods.go:89] "kube-scheduler-no-preload-20220728153949-12923" [36649128-eaef-4cb3-93e1-a52797fdea9c] Running
	I0728 15:46:05.457947   28466 system_pods.go:89] "metrics-server-5c6f97fb75-gkqvh" [44470584-40a9-4bb2-8cff-f06ef3e04c5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:46:05.457952   28466 system_pods.go:89] "storage-provisioner" [73a9ba2c-55f8-4a8f-97d0-40aee2a41f7c] Running
	I0728 15:46:05.457957   28466 system_pods.go:126] duration metric: took 203.229084ms to wait for k8s-apps to be running ...
	I0728 15:46:05.457961   28466 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 15:46:05.458012   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:46:05.468105   28466 system_svc.go:56] duration metric: took 10.139358ms WaitForService to wait for kubelet.
	I0728 15:46:05.468120   28466 kubeadm.go:572] duration metric: took 6.941365778s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0728 15:46:05.468140   28466 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:46:05.655963   28466 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:46:05.655975   28466 node_conditions.go:123] node cpu capacity is 6
	I0728 15:46:05.655988   28466 node_conditions.go:105] duration metric: took 187.840999ms to run NodePressure ...
	I0728 15:46:05.655998   28466 start.go:216] waiting for startup goroutines ...
	I0728 15:46:05.687111   28466 start.go:506] kubectl: 1.24.2, cluster: 1.24.3 (minor skew: 0)
	I0728 15:46:05.711082   28466 out.go:177] * Done! kubectl is now configured to use "no-preload-20220728153949-12923" cluster and "default" namespace by default
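[editor's note] The final status line two lines up compares the local kubectl minor version with the cluster's and reports the difference as `minor skew`. A tiny illustrative parser for that check; error handling is elided for brevity:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew computes the absolute minor-version difference reported as
    // "(minor skew: 0)" in the log; purely illustrative parsing.
    func minorSkew(client, cluster string) int {
    	minor := func(v string) int {
    		n, _ := strconv.Atoi(strings.Split(v, ".")[1])
    		return n
    	}
    	d := minor(client) - minor(cluster)
    	if d < 0 {
    		d = -d
    	}
    	return d
    }

    func main() {
    	fmt.Printf("kubectl: 1.24.2, cluster: 1.24.3 (minor skew: %d)\n", minorSkew("1.24.2", "1.24.3"))
    }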
	I0728 15:46:06.793258   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.062211512s)
	I0728 15:46:06.793371   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:06.793474   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:09.339612   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:09.848472   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:09.877399   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.877411   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:09.877472   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:09.906396   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.906414   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:09.906480   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:09.936854   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.936869   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:09.936928   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:09.966233   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.966249   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:09.966315   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:09.996992   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.997005   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:09.997065   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:10.033579   28750 logs.go:274] 0 containers: []
	W0728 15:46:10.033593   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:10.033650   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:10.069419   28750 logs.go:274] 0 containers: []
	W0728 15:46:10.069433   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:10.069498   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:10.099084   28750 logs.go:274] 0 containers: []
	W0728 15:46:10.099097   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:10.099104   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:10.099112   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:10.112767   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:10.112787   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:10.173268   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:10.173288   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:10.173301   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:10.188909   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:10.188923   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:12.242044   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053144227s)
	I0728 15:46:12.242152   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:12.242158   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:14.784324   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:14.850444   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:14.882027   28750 logs.go:274] 0 containers: []
	W0728 15:46:14.882040   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:14.882097   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:14.912290   28750 logs.go:274] 0 containers: []
	W0728 15:46:14.912303   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:14.912361   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:14.946389   28750 logs.go:274] 0 containers: []
	W0728 15:46:14.946410   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:14.946488   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:14.978870   28750 logs.go:274] 0 containers: []
	W0728 15:46:14.978883   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:14.978943   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:15.008965   28750 logs.go:274] 0 containers: []
	W0728 15:46:15.008978   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:15.009036   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:15.037778   28750 logs.go:274] 0 containers: []
	W0728 15:46:15.037792   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:15.037852   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:15.066142   28750 logs.go:274] 0 containers: []
	W0728 15:46:15.066154   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:15.066212   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:15.097151   28750 logs.go:274] 0 containers: []
	W0728 15:46:15.097164   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:15.097172   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:15.097179   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:15.139648   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:15.168589   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:15.186371   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:15.186386   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:15.242465   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:15.242479   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:15.242491   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:15.256100   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:15.256112   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:17.308800   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052710598s)
	I0728 15:46:19.811135   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:19.848655   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:19.879363   28750 logs.go:274] 0 containers: []
	W0728 15:46:19.879375   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:19.879433   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:19.909343   28750 logs.go:274] 0 containers: []
	W0728 15:46:19.909355   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:19.909414   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:19.938912   28750 logs.go:274] 0 containers: []
	W0728 15:46:19.938925   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:19.938985   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:19.975343   28750 logs.go:274] 0 containers: []
	W0728 15:46:19.975357   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:19.975425   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:20.008264   28750 logs.go:274] 0 containers: []
	W0728 15:46:20.008275   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:20.008331   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:20.038658   28750 logs.go:274] 0 containers: []
	W0728 15:46:20.038670   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:20.038723   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:20.070456   28750 logs.go:274] 0 containers: []
	W0728 15:46:20.070470   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:20.070534   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:20.102016   28750 logs.go:274] 0 containers: []
	W0728 15:46:20.102029   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:20.102037   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:20.102046   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:20.114591   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:20.114610   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:20.174188   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:20.174201   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:20.174208   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:20.189645   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:20.189663   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:22.250872   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061232319s)
	I0728 15:46:22.250983   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:22.250991   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:24.791788   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:24.849171   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:24.880694   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.880706   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:24.880760   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:24.908813   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.908826   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:24.908880   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:24.937412   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.937425   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:24.937484   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:24.966808   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.966819   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:24.966880   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:24.996939   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.996952   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:24.997013   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:25.025856   28750 logs.go:274] 0 containers: []
	W0728 15:46:25.025868   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:25.025927   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:25.054899   28750 logs.go:274] 0 containers: []
	W0728 15:46:25.054911   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:25.054970   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:25.083700   28750 logs.go:274] 0 containers: []
	W0728 15:46:25.083712   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:25.083720   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:25.083729   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:25.097410   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:25.097423   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:27.151701   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054299606s)
	I0728 15:46:27.151808   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:27.151815   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:27.192088   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:27.192102   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:27.203829   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:27.203842   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:27.257399   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:29.758384   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:29.848749   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:29.880250   28750 logs.go:274] 0 containers: []
	W0728 15:46:29.880262   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:29.880318   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:29.910202   28750 logs.go:274] 0 containers: []
	W0728 15:46:29.910215   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:29.910271   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:29.940618   28750 logs.go:274] 0 containers: []
	W0728 15:46:29.940632   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:29.940699   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:29.971567   28750 logs.go:274] 0 containers: []
	W0728 15:46:29.971583   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:29.971645   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:30.004734   28750 logs.go:274] 0 containers: []
	W0728 15:46:30.004750   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:30.004814   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:30.036150   28750 logs.go:274] 0 containers: []
	W0728 15:46:30.036164   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:30.036234   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:30.066088   28750 logs.go:274] 0 containers: []
	W0728 15:46:30.066101   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:30.066156   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:30.095216   28750 logs.go:274] 0 containers: []
	W0728 15:46:30.095228   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:30.095235   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:30.095242   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:30.148425   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:30.152196   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:30.152207   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:30.165693   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:30.165704   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:32.216553   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050871151s)
	I0728 15:46:32.216665   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:32.216673   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:32.259143   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:32.259161   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:34.771261   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:34.850228   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:34.881646   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.881658   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:34.881714   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:34.911053   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.911065   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:34.911120   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:34.940187   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.940199   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:34.940257   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:34.968953   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.968965   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:34.969022   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:34.999346   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.999359   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:34.999415   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:35.028920   28750 logs.go:274] 0 containers: []
	W0728 15:46:35.028933   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:35.028991   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:35.058519   28750 logs.go:274] 0 containers: []
	W0728 15:46:35.058531   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:35.058589   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:35.087805   28750 logs.go:274] 0 containers: []
	W0728 15:46:35.087817   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:35.087824   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:35.087831   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:35.127597   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:35.127610   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:35.140602   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:35.151800   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:35.210991   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:35.211004   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:35.211011   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:35.227071   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:35.227085   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:37.280866   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053801834s)
	I0728 15:46:39.781106   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:39.848000   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:39.879394   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.879406   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:39.879461   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:39.909065   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.909077   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:39.909133   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:39.938272   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.938283   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:39.938346   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:39.967027   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.967044   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:39.967102   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:39.996593   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.996605   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:39.996661   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:40.025955   28750 logs.go:274] 0 containers: []
	W0728 15:46:40.025967   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:40.026023   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:40.054606   28750 logs.go:274] 0 containers: []
	W0728 15:46:40.054618   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:40.054677   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:40.083931   28750 logs.go:274] 0 containers: []
	W0728 15:46:40.083944   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:40.083951   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:40.083958   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:40.122714   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:40.122727   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:40.133764   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:40.151970   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:40.205103   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:40.205113   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:40.205125   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:40.218748   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:40.218759   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:42.277646   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058908911s)
	I0728 15:46:44.779464   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:44.849951   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:44.881165   28750 logs.go:274] 0 containers: []
	W0728 15:46:44.881178   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:44.881238   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:44.909841   28750 logs.go:274] 0 containers: []
	W0728 15:46:44.909855   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:44.909917   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:44.941101   28750 logs.go:274] 0 containers: []
	W0728 15:46:44.941114   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:44.941179   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:44.972307   28750 logs.go:274] 0 containers: []
	W0728 15:46:44.972320   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:44.972376   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:45.006437   28750 logs.go:274] 0 containers: []
	W0728 15:46:45.006450   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:45.006508   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:45.036116   28750 logs.go:274] 0 containers: []
	W0728 15:46:45.036128   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:45.036185   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:45.064214   28750 logs.go:274] 0 containers: []
	W0728 15:46:45.064226   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:45.064286   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:45.093400   28750 logs.go:274] 0 containers: []
	W0728 15:46:45.093414   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:45.093420   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:45.093427   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:45.107382   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:45.107395   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:47.162864   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055491977s)
	I0728 15:46:47.162967   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:47.162974   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:47.205000   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:47.205023   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:47.216942   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:47.216956   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:47.269215   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:49.769983   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:49.849883   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:49.880517   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.880530   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:49.880587   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:49.908888   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.908904   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:49.908964   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:49.937900   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.937914   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:49.937975   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:49.966223   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.966236   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:49.966292   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:49.995275   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.995288   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:49.995344   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:50.025324   28750 logs.go:274] 0 containers: []
	W0728 15:46:50.025338   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:50.025396   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:50.054609   28750 logs.go:274] 0 containers: []
	W0728 15:46:50.054621   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:50.054679   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:50.082727   28750 logs.go:274] 0 containers: []
	W0728 15:46:50.082739   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:50.082746   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:50.082753   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:50.134737   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:50.151600   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:50.151609   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:50.166276   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:50.166289   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:52.220560   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054292421s)
	I0728 15:46:52.220667   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:52.220673   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:52.259245   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:52.259258   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:54.773839   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:54.847624   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:54.877686   28750 logs.go:274] 0 containers: []
	W0728 15:46:54.877698   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:54.877752   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:54.908194   28750 logs.go:274] 0 containers: []
	W0728 15:46:54.908206   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:54.908265   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:54.942839   28750 logs.go:274] 0 containers: []
	W0728 15:46:54.942851   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:54.942904   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:54.977060   28750 logs.go:274] 0 containers: []
	W0728 15:46:54.977072   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:54.977129   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:55.008268   28750 logs.go:274] 0 containers: []
	W0728 15:46:55.008285   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:55.008356   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:55.039796   28750 logs.go:274] 0 containers: []
	W0728 15:46:55.039809   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:55.039870   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:55.070921   28750 logs.go:274] 0 containers: []
	W0728 15:46:55.070933   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:55.070992   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:55.102136   28750 logs.go:274] 0 containers: []
	W0728 15:46:55.102153   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:55.102162   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:55.102171   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
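
The loop above runs the same probe once per control-plane component: docker ps -a --filter=name=k8s_<component> --format={{.ID}}, treating an empty ID list as "No container was found". For readers reproducing that probe outside the harness, a minimal Go sketch, assuming only a local docker CLI; the component list is copied from the log, everything else is illustrative and not minikube's actual logs.go code:

    // probe.go: approximate the per-component container check looped above.
    // Assumes a local `docker` CLI; on the real node this runs over SSH.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs runs the same invocation the harness logs:
    //   docker ps -a --filter=name=k8s_<name> --format={{.ID}}
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	// One container ID per line; an empty slice means "no container found".
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kubernetes-dashboard", "storage-provisioner", "kube-controller-manager"}
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("probe %s: %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
    	}
    }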
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 22:41:05 UTC, end at Thu 2022-07-28 22:46:56 UTC. --
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.060335584Z" level=info msg="ignoring event" container=0bf3c767b4fc2a66156130c9f406c62858d7824dc583661ff3a1911efe8c9923 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.230670906Z" level=info msg="ignoring event" container=cf67e3eea06a184545f7c1c8c3206331377b609d7b551dd8d6d9a9851c327741 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.301903139Z" level=info msg="ignoring event" container=4aaa285a85f37fbf5d47fef1e255b437bea00acd44601354369df9190c9c45c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.402914277Z" level=info msg="ignoring event" container=854a3657e51840af60e9a740ccf01c07549223096fcdcec9a6fb364079270330 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.475584996Z" level=info msg="ignoring event" container=605b2ce0e3877547d7f292baac0fcf2e1263b131390f1ca138c2eb06923cfb59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.541447594Z" level=info msg="ignoring event" container=e91b81d1dc870aedffddc2202127d5756fd4af9381706f082f6d8a62a9f62a12 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.609312065Z" level=info msg="ignoring event" container=2be3aebac2ad81bdd7ee2c337bf62a2850ddff5d61b9ab278be1b5092161396f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.718623266Z" level=info msg="ignoring event" container=589a60210160b5c8442a1abb2a56aedae556d1fd1e7c7b22987f05d12c55749c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.787457711Z" level=info msg="ignoring event" container=d2180fc16c0104c3a6379715a6bb6b7f3ab3f267d71afb32b792110092fb4f3c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.856272084Z" level=info msg="ignoring event" container=d3ae94d13f0c5715761d7eec855ecf13220f9a6b19b8d12e52e114a01eb0d34d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.931766156Z" level=info msg="ignoring event" container=678b4146bfd645109fab5371c1dd06b1d380caab0f42fe999edbb29ed477cf7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:36 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:36.029832255Z" level=info msg="ignoring event" container=bc96ab080e11116579e35688fdf8058bf2e051efc5bbe68e9d39116b74c3d360 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:46:01 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:01.010425567Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:01 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:01.010550628Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:01 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:01.011827455Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:01 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:01.777332851Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 28 22:46:04 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:04.682135949Z" level=info msg="ignoring event" container=117cd20a1edcac6e7c1c895a10c59ef9ac9abf4a9af5474436bcee68c60ef85f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:46:04 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:04.846525290Z" level=info msg="ignoring event" container=08104b1f598e18174657701b5f577f440a536335749d61c9ae494cc44c5c01a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:46:07 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:07.374124592Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 28 22:46:07 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:07.671967241Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 28 22:46:10 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:10.940764587Z" level=info msg="ignoring event" container=1428bf62df75e8d6001ac46cbdac462cbe800dd1022ff16dda709abc52f019be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:46:12 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:12.077812782Z" level=info msg="ignoring event" container=d55bb53547bb29e664bf66093cc96b54f0b3eceebd788da28c50d724356a4ec7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:46:15 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:15.209985902Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:15 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:15.210030631Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:15 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:15.211656948Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
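
Each pull attempt in the dockerd journal above fails at the same first step: resolving the registry host "fake.domain", which appears to be a deliberately unresolvable host configured by the test. That step can be reproduced without Docker at all; a sketch, with the hostname taken from the log and nothing else assumed:

    // pullcheck.go: reproduce the DNS step dockerd fails on above.
    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	addrs, err := net.LookupHost("fake.domain")
    	if err != nil {
    		// Matches dockerd's "dial tcp: lookup fake.domain ... no such host"
    		fmt.Println("lookup failed as expected:", err)
    		return
    	}
    	fmt.Println("unexpectedly resolved:", addrs)
    }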
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	d55bb53547bb2       a90209bb39e3d                                                                                    45 seconds ago       Exited              dashboard-metrics-scraper   1                   a612834ac1b25
	a93fb9f591314       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   50 seconds ago       Running             kubernetes-dashboard        0                   5d2b839c3189f
	c56995f33a521       6e38f40d628db                                                                                    56 seconds ago       Running             storage-provisioner         0                   ef9726c3d6e93
	932389c13a32e       a4ca41631cc7a                                                                                    57 seconds ago       Running             coredns                     0                   149ae2612e69d
	42b621e0915da       2ae1ba6417cbc                                                                                    58 seconds ago       Running             kube-proxy                  0                   1859344ec0c9e
	d8f608d7350f0       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   4144606f15c19
	54f11c6f5e5cc       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   b8468190a3242
	50accddf195e0       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   48931cef22923
	61c5e29a44123       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   a6744ba8b63c2
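
This table was collected with the fallback command that recurs throughout the log, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: use crictl if it is installed, otherwise fall back to docker. A standalone sketch of the same fallback, assuming root access on the node (the command string is copied verbatim from the log):

    // status.go: the crictl-or-docker fallback the harness runs over SSH,
    // executed locally here. Requires sudo; output format depends on which
    // runtime responds.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	if err != nil {
    		fmt.Printf("both runtimes failed: %v\n", err)
    	}
    	fmt.Print(string(out))
    }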
	
	* 
	* ==> coredns [932389c13a32] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220728153949-12923
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220728153949-12923
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551
	                    minikube.k8s.io/name=no-preload-20220728153949-12923
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_28T15_45_45_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Jul 2022 22:45:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220728153949-12923
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Jul 2022 22:46:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Jul 2022 22:46:54 +0000   Thu, 28 Jul 2022 22:45:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Jul 2022 22:46:54 +0000   Thu, 28 Jul 2022 22:45:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Jul 2022 22:46:54 +0000   Thu, 28 Jul 2022 22:45:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Jul 2022 22:46:54 +0000   Thu, 28 Jul 2022 22:46:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220728153949-12923
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                26065199-fa84-4b4b-8bc9-9762d3650182
	  Boot ID:                    defbf4cf-33ea-4fb3-a1e4-ed0c170fdaf9
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-kv2dp                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     60s
	  kube-system                 etcd-no-preload-20220728153949-12923                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         72s
	  kube-system                 kube-apiserver-no-preload-20220728153949-12923             250m (4%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-no-preload-20220728153949-12923    200m (3%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-wnfz5                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-no-preload-20220728153949-12923             100m (1%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 metrics-server-5c6f97fb75-gkqvh                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         57s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-b6ggg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-j2qnw                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  79s (x5 over 79s)  kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x5 over 79s)  kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x5 over 79s)  kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasSufficientPID
	  Normal  Starting                 72s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  72s                kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s                kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s                kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasSufficientPID
	  Normal  NodeReady                72s                kubelet          Node no-preload-20220728153949-12923 status is now: NodeReady
	  Normal  RegisteredNode           61s                node-controller  Node no-preload-20220728153949-12923 event: Registered Node no-preload-20220728153949-12923 in Controller
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                 kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s                 kubelet          Node no-preload-20220728153949-12923 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                 kubelet          Node no-preload-20220728153949-12923 status is now: NodeReady
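
This "describe nodes" section is what the harness's failed describe calls earlier in the log were trying to reach. For spot-checking just the Ready condition shown above without the full describe output, a sketch assuming kubectl on PATH with a kubeconfig for this cluster; the node name is copied from the report:

    // ready.go: extract the Ready condition from the node status above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("kubectl", "get", "node",
    		"no-preload-20220728153949-12923",
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).CombinedOutput()
    	if err != nil {
    		fmt.Printf("kubectl failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("Ready=%s\n", out)
    }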
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [50accddf195e] <==
	* {"level":"info","ts":"2022-07-28T22:45:39.157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-07-28T22:45:39.157Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-07-28T22:45:39.159Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-28T22:45:39.159Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T22:45:39.159Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T22:45:39.160Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-28T22:45:39.160Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220728153949-12923 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T22:45:39.554Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-28T22:45:39.554Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:45:39.555Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:46:57 up  1:07,  0 users,  load average: 0.25, 0.58, 0.90
	Linux no-preload-20220728153949-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [61c5e29a4412] <==
	* I0728 22:45:42.982937       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0728 22:45:43.236261       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0728 22:45:43.262077       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0728 22:45:43.363728       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0728 22:45:43.367206       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0728 22:45:43.367874       1 controller.go:611] quota admission added evaluator for: endpoints
	I0728 22:45:43.370831       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0728 22:45:44.105709       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0728 22:45:45.030702       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0728 22:45:45.036536       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0728 22:45:45.044994       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0728 22:45:45.129299       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0728 22:45:57.291364       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0728 22:45:57.791467       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0728 22:45:58.344864       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0728 22:46:00.203577       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.108.223.49]
	I0728 22:46:00.436966       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.108.173.66]
	I0728 22:46:00.447794       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.99.1.148]
	W0728 22:46:01.063162       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 22:46:01.063200       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0728 22:46:01.063206       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 22:46:01.063172       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 22:46:01.063291       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0728 22:46:01.064311       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [d8f608d7350f] <==
	* I0728 22:45:57.945568       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-kv2dp"
	I0728 22:45:58.035224       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0728 22:45:58.039060       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-9vjb2"
	I0728 22:46:00.054316       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0728 22:46:00.059999       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0728 22:46:00.066379       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0728 22:46:00.130405       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-gkqvh"
	I0728 22:46:00.287582       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0728 22:46:00.330795       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 22:46:00.333076       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0728 22:46:00.335331       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:46:00.337065       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 22:46:00.341631       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 22:46:00.341794       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:46:00.341895       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:46:00.345987       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 22:46:00.346073       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:46:00.346103       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 22:46:00.346121       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:46:00.350841       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:46:00.350904       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 22:46:00.356281       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-b6ggg"
	I0728 22:46:00.363315       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-j2qnw"
	E0728 22:46:54.191769       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0728 22:46:54.195822       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [42b621e0915d] <==
	* I0728 22:45:58.322158       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0728 22:45:58.322216       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0728 22:45:58.322254       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 22:45:58.340362       1 server_others.go:206] "Using iptables Proxier"
	I0728 22:45:58.340399       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 22:45:58.340407       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 22:45:58.340417       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 22:45:58.340436       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:45:58.340801       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:45:58.341165       1 server.go:661] "Version info" version="v1.24.3"
	I0728 22:45:58.341263       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:45:58.342471       1 config.go:317] "Starting service config controller"
	I0728 22:45:58.342503       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 22:45:58.342523       1 config.go:226] "Starting endpoint slice config controller"
	I0728 22:45:58.342566       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 22:45:58.342581       1 config.go:444] "Starting node config controller"
	I0728 22:45:58.342588       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 22:45:58.442681       1 shared_informer.go:262] Caches are synced for node config
	I0728 22:45:58.442743       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0728 22:45:58.442755       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [54f11c6f5e5c] <==
	* W0728 22:45:42.053936       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0728 22:45:42.054238       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0728 22:45:42.054354       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0728 22:45:42.054367       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0728 22:45:42.055385       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0728 22:45:42.055424       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0728 22:45:42.055482       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0728 22:45:42.055502       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0728 22:45:42.055535       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0728 22:45:42.055630       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0728 22:45:42.055691       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0728 22:45:42.055757       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0728 22:45:42.055699       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0728 22:45:42.055770       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0728 22:45:42.877463       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0728 22:45:42.877514       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0728 22:45:42.970100       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0728 22:45:42.970136       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0728 22:45:42.987042       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0728 22:45:42.987081       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0728 22:45:43.010515       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0728 22:45:43.010550       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0728 22:45:43.053332       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0728 22:45:43.053370       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0728 22:45:43.450786       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
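The burst of "forbidden" warnings above is typical of a kube-scheduler that comes up before the API server has finished installing its bootstrap RBAC bindings; the warnings stop within a second or two of startup, which is the usual sign that the system:kube-scheduler role bindings became visible. Assuming kubectl access to this profile's context, the effective permissions can be spot-checked with kubectl's built-in authorization query:

    kubectl --context no-preload-20220728153949-12923 auth can-i list pods \
        --as=system:kube-scheduler
    # prints "yes" once the RBAC bootstrap has completed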
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 22:41:05 UTC, end at Thu 2022-07-28 22:46:58 UTC. --
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.566125    9821 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602405    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpnz4\" (UniqueName: \"kubernetes.io/projected/44470584-40a9-4bb2-8cff-f06ef3e04c5a-kube-api-access-dpnz4\") pod \"metrics-server-5c6f97fb75-gkqvh\" (UID: \"44470584-40a9-4bb2-8cff-f06ef3e04c5a\") " pod="kube-system/metrics-server-5c6f97fb75-gkqvh"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602511    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bd8afa6-e125-44b3-b396-bcec5dc95ab3-xtables-lock\") pod \"kube-proxy-wnfz5\" (UID: \"4bd8afa6-e125-44b3-b396-bcec5dc95ab3\") " pod="kube-system/kube-proxy-wnfz5"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602532    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bq7f\" (UniqueName: \"kubernetes.io/projected/73a9ba2c-55f8-4a8f-97d0-40aee2a41f7c-kube-api-access-9bq7f\") pod \"storage-provisioner\" (UID: \"73a9ba2c-55f8-4a8f-97d0-40aee2a41f7c\") " pod="kube-system/storage-provisioner"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602567    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/44470584-40a9-4bb2-8cff-f06ef3e04c5a-tmp-dir\") pod \"metrics-server-5c6f97fb75-gkqvh\" (UID: \"44470584-40a9-4bb2-8cff-f06ef3e04c5a\") " pod="kube-system/metrics-server-5c6f97fb75-gkqvh"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602618    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f57a996a-e1a8-4e75-a619-5671e5398a85-config-volume\") pod \"coredns-6d4b75cb6d-kv2dp\" (UID: \"f57a996a-e1a8-4e75-a619-5671e5398a85\") " pod="kube-system/coredns-6d4b75cb6d-kv2dp"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602688    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4bd8afa6-e125-44b3-b396-bcec5dc95ab3-kube-proxy\") pod \"kube-proxy-wnfz5\" (UID: \"4bd8afa6-e125-44b3-b396-bcec5dc95ab3\") " pod="kube-system/kube-proxy-wnfz5"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602734    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg8td\" (UniqueName: \"kubernetes.io/projected/4bd8afa6-e125-44b3-b396-bcec5dc95ab3-kube-api-access-hg8td\") pod \"kube-proxy-wnfz5\" (UID: \"4bd8afa6-e125-44b3-b396-bcec5dc95ab3\") " pod="kube-system/kube-proxy-wnfz5"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602771    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dblsc\" (UniqueName: \"kubernetes.io/projected/ab9a025e-16ee-44e3-a3ee-97431afb9113-kube-api-access-dblsc\") pod \"kubernetes-dashboard-5fd5574d9f-j2qnw\" (UID: \"ab9a025e-16ee-44e3-a3ee-97431afb9113\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-j2qnw"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602790    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9m5x\" (UniqueName: \"kubernetes.io/projected/f57a996a-e1a8-4e75-a619-5671e5398a85-kube-api-access-s9m5x\") pod \"coredns-6d4b75cb6d-kv2dp\" (UID: \"f57a996a-e1a8-4e75-a619-5671e5398a85\") " pod="kube-system/coredns-6d4b75cb6d-kv2dp"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602857    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92g2s\" (UniqueName: \"kubernetes.io/projected/362a0692-30b8-440a-9768-4fd3e312c24d-kube-api-access-92g2s\") pod \"dashboard-metrics-scraper-dffd48c4c-b6ggg\" (UID: \"362a0692-30b8-440a-9768-4fd3e312c24d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-b6ggg"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602906    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bd8afa6-e125-44b3-b396-bcec5dc95ab3-lib-modules\") pod \"kube-proxy-wnfz5\" (UID: \"4bd8afa6-e125-44b3-b396-bcec5dc95ab3\") " pod="kube-system/kube-proxy-wnfz5"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602924    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab9a025e-16ee-44e3-a3ee-97431afb9113-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-j2qnw\" (UID: \"ab9a025e-16ee-44e3-a3ee-97431afb9113\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-j2qnw"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602939    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/73a9ba2c-55f8-4a8f-97d0-40aee2a41f7c-tmp\") pod \"storage-provisioner\" (UID: \"73a9ba2c-55f8-4a8f-97d0-40aee2a41f7c\") " pod="kube-system/storage-provisioner"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602977    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/362a0692-30b8-440a-9768-4fd3e312c24d-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-b6ggg\" (UID: \"362a0692-30b8-440a-9768-4fd3e312c24d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-b6ggg"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602999    9821 reconciler.go:157] "Reconciler: start to sync state"
	Jul 28 22:46:56 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:56.762692    9821 request.go:601] Waited for 1.136558002s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 28 22:46:56 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:56.821225    9821 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220728153949-12923\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220728153949-12923"
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.002012    9821 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220728153949-12923\" already exists" pod="kube-system/kube-scheduler-no-preload-20220728153949-12923"
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.177693    9821 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220728153949-12923\" already exists" pod="kube-system/kube-apiserver-no-preload-20220728153949-12923"
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.407750    9821 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220728153949-12923\" already exists" pod="kube-system/etcd-no-preload-20220728153949-12923"
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.783555    9821 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.783653    9821 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.783811    9821 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dpnz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-gkqvh_kube-system(44470584-40a9-4bb2-8cff-f06ef3e04c5a): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.783878    9821 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-gkqvh" podUID=44470584-40a9-4bb2-8cff-f06ef3e04c5a
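The ErrImagePull above is expected rather than a runtime defect: the Audit table later in this log shows metrics-server being enabled with --registries=MetricsServer=fake.domain, so the kubelet is deliberately pointed at an unresolvable registry. A quick way to confirm the failure is confined to that pod is to read its events (a sketch; the k8s-app=metrics-server label is assumed from the stock addon manifest):

    kubectl --context no-preload-20220728153949-12923 -n kube-system \
        describe pod -l k8s-app=metrics-server
    # Events should show only Failed/ErrImagePull against fake.domain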
	
	* 
	* ==> kubernetes-dashboard [a93fb9f59131] <==
	* 2022/07/28 22:46:06 Using namespace: kubernetes-dashboard
	2022/07/28 22:46:06 Using in-cluster config to connect to apiserver
	2022/07/28 22:46:06 Using secret token for csrf signing
	2022/07/28 22:46:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/28 22:46:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/28 22:46:06 Successful initial request to the apiserver, version: v1.24.3
	2022/07/28 22:46:06 Generating JWE encryption key
	2022/07/28 22:46:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/28 22:46:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/28 22:46:07 Initializing JWE encryption key from synchronized object
	2022/07/28 22:46:07 Creating in-cluster Sidecar client
	2022/07/28 22:46:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/28 22:46:07 Serving insecurely on HTTP port: 9090
	2022/07/28 22:46:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/28 22:46:06 Starting overwatch
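Two notes on the dashboard log: the "Metric client health check failed" lines are the dashboard retrying until dashboard-metrics-scraper becomes reachable, and the 22:46:06 "Starting overwatch" entry printed after a 22:46:53 line looks like delayed flushing rather than a real ordering problem. Whether the scraper Service has endpoints can be checked directly, assuming kubectl access to the same context:

    kubectl --context no-preload-20220728153949-12923 -n kubernetes-dashboard \
        get svc,endpoints dashboard-metrics-scraper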
	
	* 
	* ==> storage-provisioner [c56995f33a52] <==
	* I0728 22:46:00.833319       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 22:46:00.842602       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 22:46:00.842657       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0728 22:46:00.849866       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0728 22:46:00.849995       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220728153949-12923_7d705e6f-182c-4a75-b5f8-00aedb056239!
	I0728 22:46:00.851067       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30f39fb9-9f84-46b1-a264-2ea8bc0e9187", APIVersion:"v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220728153949-12923_7d705e6f-182c-4a75-b5f8-00aedb056239 became leader
	I0728 22:46:00.950760       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220728153949-12923_7d705e6f-182c-4a75-b5f8-00aedb056239!
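The provisioner only starts its controller after winning the kube-system/k8s.io-minikube-hostpath leader-election lock, which this vintage of client-go records as an annotation on an Endpoints object (consistent with the Endpoints event above). The current holder can be read back with a jsonpath query; the annotation key below is the standard client-go one and is an assumption here:

    kubectl --context no-preload-20220728153949-12923 -n kube-system \
        get endpoints k8s.io-minikube-hostpath \
        -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'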
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220728153949-12923 -n no-preload-20220728153949-12923
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220728153949-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-gkqvh
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220728153949-12923 describe pod metrics-server-5c6f97fb75-gkqvh
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220728153949-12923 describe pod metrics-server-5c6f97fb75-gkqvh: exit status 1 (264.876539ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-gkqvh" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220728153949-12923 describe pod metrics-server-5c6f97fb75-gkqvh: exit status 1
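The NotFound is a race rather than a missing pod: between the listing at helpers_test.go:261 and the describe at helpers_test.go:275, the metrics-server pod was evidently deleted or replaced. A more race-tolerant post-mortem would resolve namespace and name in one pass and tolerate pods that vanish mid-loop, e.g. (a sketch, not the harness's actual code):

    kubectl --context no-preload-20220728153949-12923 get po -A \
        --field-selector=status.phase!=Running \
        -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
    while read -r ns name; do
        # the pod may already be gone; keep going either way
        kubectl --context no-preload-20220728153949-12923 -n "$ns" \
            describe pod "$name" || true
    done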
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-20220728153949-12923
helpers_test.go:235: (dbg) docker inspect no-preload-20220728153949-12923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3f8adb766de93f397568039069c5b097806455f11713d0343f33bb401ef0b502",
	        "Created": "2022-07-28T22:39:51.528852294Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238589,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:41:04.982223707Z",
	            "FinishedAt": "2022-07-28T22:41:02.998662599Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/3f8adb766de93f397568039069c5b097806455f11713d0343f33bb401ef0b502/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f8adb766de93f397568039069c5b097806455f11713d0343f33bb401ef0b502/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f8adb766de93f397568039069c5b097806455f11713d0343f33bb401ef0b502/hosts",
	        "LogPath": "/var/lib/docker/containers/3f8adb766de93f397568039069c5b097806455f11713d0343f33bb401ef0b502/3f8adb766de93f397568039069c5b097806455f11713d0343f33bb401ef0b502-json.log",
	        "Name": "/no-preload-20220728153949-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-20220728153949-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-20220728153949-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/70815aca0066c90e611ae47b7e08eb4cad8229f27f8910d38dcfd7c0ca62b8fe-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/70815aca0066c90e611ae47b7e08eb4cad8229f27f8910d38dcfd7c0ca62b8fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/70815aca0066c90e611ae47b7e08eb4cad8229f27f8910d38dcfd7c0ca62b8fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/70815aca0066c90e611ae47b7e08eb4cad8229f27f8910d38dcfd7c0ca62b8fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-20220728153949-12923",
	                "Source": "/var/lib/docker/volumes/no-preload-20220728153949-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-20220728153949-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-20220728153949-12923",
	                "name.minikube.sigs.k8s.io": "no-preload-20220728153949-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2bad7230a6f13a83826a3bbc7a594991b8d8737e0708569e889a37a79c4c6eef",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58932"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58933"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58934"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58935"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58936"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2bad7230a6f1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-20220728153949-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3f8adb766de9",
	                        "no-preload-20220728153949-12923"
	                    ],
	                    "NetworkID": "790a06655bfe3414e1077459bcf3050a64dcee1b7d41236d506cad966a591457",
	                    "EndpointID": "8f65d24e088b3853630d1b1e0bb6dca05b0f3417d6812f78bf18635869ea87cd",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
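Most of the inspect document above is incidental to this failure; the fields the harness keys on are the container state and the published ports. They can be extracted directly with a Go template, in the same style the harness itself uses for its cli_runner calls:

    docker container inspect no-preload-20220728153949-12923 \
        --format '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
    # expected output for the dump above: running 58936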
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220728153949-12923 -n no-preload-20220728153949-12923
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-20220728153949-12923 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p no-preload-20220728153949-12923 logs -n 25: (2.706640674s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p false-20220728152331-12923                     | false-20220728152331-12923              | jenkins | v1.26.0 | 28 Jul 22 15:36 PDT | 28 Jul 22 15:37 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=false                     |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| start   | -p bridge-20220728152330-12923                    | bridge-20220728152330-12923             | jenkins | v1.26.0 | 28 Jul 22 15:36 PDT | 28 Jul 22 15:36 PDT |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                                         |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=bridge                    |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p bridge-20220728152330-12923                    | bridge-20220728152330-12923             | jenkins | v1.26.0 | 28 Jul 22 15:36 PDT | 28 Jul 22 15:36 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p bridge-20220728152330-12923                    | bridge-20220728152330-12923             | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:37 PDT |
	| start   | -p                                                | enable-default-cni-20220728152330-12923 | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:37 PDT |
	|         | enable-default-cni-20220728152330-12923           |                                         |         |         |                     |                     |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --enable-default-cni=true                         |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p false-20220728152331-12923                     | false-20220728152331-12923              | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:37 PDT |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p false-20220728152331-12923                     | false-20220728152331-12923              | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:37 PDT |
	| start   | -p                                                | kubenet-20220728152330-12923            | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:38 PDT |
	|         | kubenet-20220728152330-12923                      |                                         |         |         |                     |                     |
	|         | --memory=2048                                     |                                         |         |         |                     |                     |
	|         | --alsologtostderr                                 |                                         |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                     |                     |
	|         | --network-plugin=kubenet                          |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	| ssh     | -p                                                | enable-default-cni-20220728152330-12923 | jenkins | v1.26.0 | 28 Jul 22 15:37 PDT | 28 Jul 22 15:37 PDT |
	|         | enable-default-cni-20220728152330-12923           |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | enable-default-cni-20220728152330-12923 | jenkins | v1.26.0 | 28 Jul 22 15:38 PDT | 28 Jul 22 15:38 PDT |
	|         | enable-default-cni-20220728152330-12923           |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:38 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220728152330-12923            | jenkins | v1.26.0 | 28 Jul 22 15:38 PDT | 28 Jul 22 15:38 PDT |
	|         | kubenet-20220728152330-12923                      |                                         |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                         |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220728152330-12923            | jenkins | v1.26.0 | 28 Jul 22 15:39 PDT | 28 Jul 22 15:39 PDT |
	|         | kubenet-20220728152330-12923                      |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:39 PDT | 28 Jul 22 15:40 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:40 PDT | 28 Jul 22 15:40 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:40 PDT | 28 Jul 22 15:41 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:41 PDT | 28 Jul 22 15:41 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:41 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                         |         |         |                     |                     |
	|         | --driver=docker                                   |                                         |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                         |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:42 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT | 28 Jul 22 15:43 PDT |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT | 28 Jul 22 15:43 PDT |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220728153807-12923    | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                         |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                         |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                         |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                         |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                         |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                         |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                         |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220728153949-12923         | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                         |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                         |         |         |                     |                     |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|---------------------|---------------------|
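Read top to bottom, the Audit trail reconstructs the exact sequence TestStartStop ran against this profile: start with --preload=false, enable metrics-server against the intentionally bogus fake.domain registry, stop, enable dashboard, start again, then pause and unpause. The step under test can be replayed in isolation with the same binary and flags recorded in the table:

    out/minikube-darwin-amd64 pause -p no-preload-20220728153949-12923 \
        --alsologtostderr -v=1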
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 15:43:50
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 15:43:50.132817   28750 out.go:296] Setting OutFile to fd 1 ...
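Decoded against the format header above, that first line reads: severity I (info), date 0728 (Jul 28), time 15:43:50.132817, thread id 28750, emitted at out.go:296. One practical consequence is that warnings and errors in a dump like this are just a prefix match away (a sketch; the file name is hypothetical):

    grep -E '^[[:space:]]*[WE][0-9]{4} ' last-start.log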
	I0728 15:43:50.132989   28750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:43:50.132995   28750 out.go:309] Setting ErrFile to fd 2...
	I0728 15:43:50.133000   28750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:43:50.133108   28750 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:43:50.133582   28750 out.go:303] Setting JSON to false
	I0728 15:43:50.149553   28750 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9272,"bootTime":1659038958,"procs":353,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:43:50.149639   28750 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:43:50.171234   28750 out.go:177] * [old-k8s-version-20220728153807-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:43:50.193226   28750 notify.go:193] Checking for updates...
	I0728 15:43:50.215046   28750 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:43:50.237023   28750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:43:50.257931   28750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:43:50.279132   28750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:43:50.301171   28750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:43:50.323702   28750 config.go:178] Loaded profile config "old-k8s-version-20220728153807-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0728 15:43:50.345915   28750 out.go:177] * Kubernetes 1.24.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.3
	I0728 15:43:50.367017   28750 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:43:50.437569   28750 docker.go:137] docker version: linux-20.10.17
	I0728 15:43:50.437729   28750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:43:50.569692   28750 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:43:50.498204227 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
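
The docker system info --format "{{json .}}" probe above hands back the daemon's full capability report as one JSON document, which the CLI runner decodes before driver validation. A minimal sketch of the same probe in Go, assuming only that the docker CLI is on PATH; the struct keeps just a few of the keys visible in the dump (ServerVersion, OperatingSystem, NCPU, MemTotal):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// dockerInfo holds only the handful of fields this sketch cares about;
// the real payload (see the dump above) carries many more keys.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
}

func main() {
	// Same probe as the log line: docker system info --format "{{json .}}"
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatalf("docker system info: %v", err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("docker %s on %s (%d CPUs, %d bytes RAM)\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
}
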
	I0728 15:43:50.591689   28750 out.go:177] * Using the docker driver based on existing profile
	I0728 15:43:50.613510   28750 start.go:284] selected driver: docker
	I0728 15:43:50.613538   28750 start.go:808] validating driver "docker" against &{Name:old-k8s-version-20220728153807-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:43:50.613718   28750 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:43:50.617013   28750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:43:50.747972   28750 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:43:50.676530285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:43:50.748120   28750 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 15:43:50.748138   28750 cni.go:95] Creating CNI manager for ""
	I0728 15:43:50.748148   28750 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:43:50.748159   28750 start_flags.go:310] config:
	{Name:old-k8s-version-20220728153807-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:43:50.791553   28750 out.go:177] * Starting control plane node old-k8s-version-20220728153807-12923 in cluster old-k8s-version-20220728153807-12923
	I0728 15:43:50.812795   28750 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:43:50.834772   28750 out.go:177] * Pulling base image ...
	I0728 15:43:50.876918   28750 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 15:43:50.876976   28750 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:43:50.877001   28750 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0728 15:43:50.877022   28750 cache.go:57] Caching tarball of preloaded images
	I0728 15:43:50.877208   28750 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:43:50.877230   28750 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0728 15:43:50.878252   28750 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/config.json ...
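
profile.go persists the cluster definition to profiles/<name>/config.json under MINIKUBE_HOME, and config.go:178 reads it back at the top of the run. A hedged sketch of that read, assuming the JSON keys mirror the exported struct field names echoed in the dump above (Name, Driver, KubernetesConfig.KubernetesVersion); the real schema is much larger:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// profileConfig mirrors a small slice of the struct printed in the log.
// Field names are copied from that dump; treat the exact schema as assumed.
type profileConfig struct {
	Name             string `json:"Name"`
	Driver           string `json:"Driver"`
	KubernetesConfig struct {
		KubernetesVersion string `json:"KubernetesVersion"`
		ClusterName       string `json:"ClusterName"`
	} `json:"KubernetesConfig"`
}

func main() {
	home := os.Getenv("MINIKUBE_HOME") // e.g. the .minikube dir shown in the log
	path := filepath.Join(home, "profiles", "old-k8s-version-20220728153807-12923", "config.json")
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var cfg profileConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: driver=%s kubernetes=%s\n",
		cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion)
}
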
	I0728 15:43:50.941312   28750 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:43:50.941328   28750 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:43:50.941340   28750 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:43:50.941397   28750 start.go:370] acquiring machines lock for old-k8s-version-20220728153807-12923: {Name:mke15a14ac0b96e8c97ba263723c52eb5c7e7def Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:43:50.941474   28750 start.go:374] acquired machines lock for "old-k8s-version-20220728153807-12923" in 57.265µs
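
start.go:370 guards machine creation with a named lock whose retry parameters appear in the dump: poll every 500ms, give up after 10m; this run won the lock in 57µs because nothing else held it. The semantics can be sketched with a plain lock file; this is an illustrative stand-in, not minikube's actual lock implementation:

package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// acquire polls for an exclusive lock file, retrying every delay until
// timeout elapses — the same Delay/Timeout pair shown in the log (500ms / 10m).
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	defer release()
	fmt.Println("lock held; safe to start the machine")
}
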
	I0728 15:43:50.941495   28750 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:43:50.941503   28750 fix.go:55] fixHost starting: 
	I0728 15:43:50.941727   28750 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220728153807-12923 --format={{.State.Status}}
	I0728 15:43:51.004580   28750 fix.go:103] recreateIfNeeded on old-k8s-version-20220728153807-12923: state=Stopped err=<nil>
	W0728 15:43:51.004619   28750 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:43:51.026654   28750 out.go:177] * Restarting existing docker container for "old-k8s-version-20220728153807-12923" ...
	I0728 15:43:50.398263   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:43:52.897470   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:43:51.069483   28750 cli_runner.go:164] Run: docker start old-k8s-version-20220728153807-12923
	I0728 15:43:51.432239   28750 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220728153807-12923 --format={{.State.Status}}
	I0728 15:43:51.497121   28750 kic.go:415] container "old-k8s-version-20220728153807-12923" state is running.
	I0728 15:43:51.497698   28750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220728153807-12923
	I0728 15:43:51.568555   28750 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/config.json ...
	I0728 15:43:51.568955   28750 machine.go:88] provisioning docker machine ...
	I0728 15:43:51.568976   28750 ubuntu.go:169] provisioning hostname "old-k8s-version-20220728153807-12923"
	I0728 15:43:51.569046   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:51.636172   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:51.636370   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:51.636385   28750 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220728153807-12923 && echo "old-k8s-version-20220728153807-12923" | sudo tee /etc/hostname
	I0728 15:43:51.762903   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220728153807-12923
	
	I0728 15:43:51.762993   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:51.828455   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:51.828606   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:51.828621   28750 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220728153807-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220728153807-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220728153807-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:43:51.949269   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: 
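
The /etc/hosts fragment above is deliberately idempotent: it exits early if any line already ends in the machine name, rewrites an existing 127.0.1.1 entry in place, and appends only as a last resort, so repeated provisioning never duplicates entries. The same ensure-one-entry logic sketched in Go against a local file (minikube runs the shell form over SSH):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// ensureHost mirrors the grep/sed/tee fragment above: no-op if an entry
// exists, rewrite the 127.0.1.1 alias if present, otherwise append one.
func ensureHost(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), name) {
			return nil // an entry for this hostname already exists
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing loopback alias
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name) // no alias yet: append one
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	if err := ensureHost("/etc/hosts", "old-k8s-version-20220728153807-12923"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("hosts entry ensured")
}
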
	I0728 15:43:51.949293   28750 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:43:51.949317   28750 ubuntu.go:177] setting up certificates
	I0728 15:43:51.949328   28750 provision.go:83] configureAuth start
	I0728 15:43:51.949396   28750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220728153807-12923
	I0728 15:43:52.013262   28750 provision.go:138] copyHostCerts
	I0728 15:43:52.013379   28750 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:43:52.013389   28750 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:43:52.013487   28750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:43:52.013675   28750 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:43:52.013683   28750 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:43:52.013741   28750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:43:52.013881   28750 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:43:52.013887   28750 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:43:52.013945   28750 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:43:52.014068   28750 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220728153807-12923 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220728153807-12923]
	I0728 15:43:52.162837   28750 provision.go:172] copyRemoteCerts
	I0728 15:43:52.162892   28750 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:43:52.162936   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:52.226854   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:52.314899   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:43:52.331775   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0728 15:43:52.349209   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0728 15:43:52.366683   28750 provision.go:86] duration metric: configureAuth took 417.345293ms
	I0728 15:43:52.366697   28750 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:43:52.366840   28750 config.go:178] Loaded profile config "old-k8s-version-20220728153807-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0728 15:43:52.366907   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:52.432300   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:52.432458   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:52.432469   28750 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:43:52.556064   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:43:52.556075   28750 ubuntu.go:71] root file system type: overlay
	I0728 15:43:52.556206   28750 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:43:52.556278   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:52.620853   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:52.621084   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:52.621129   28750 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:43:52.751843   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:43:52.751916   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:52.816883   28750 main.go:134] libmachine: Using SSH client type: native
	I0728 15:43:52.817041   28750 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 58969 <nil> <nil>}
	I0728 15:43:52.817055   28750 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:43:52.941836   28750 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:43:52.941853   28750 machine.go:91] provisioned docker machine in 1.372912502s
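
Provisioning writes the rendered unit to docker.service.new, and the one-liner above (diff -u ... || { mv ...; systemctl daemon-reload && ... restart docker; }) swaps it in and bounces Docker only when the contents actually differ, which is why an unchanged restart stays cheap (~1.4s here). A Go sketch of the same compare-and-swap, assuming systemctl is available locally:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// swapIfChanged installs newPath over unitPath and reloads/restarts the
// service only when the contents differ, mirroring the shell one-liner.
func swapIfChanged(unitPath, newPath, service string) error {
	oldData, _ := os.ReadFile(unitPath) // a missing unit reads as empty and counts as changed
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		return os.Remove(newPath) // no change: drop the staged copy
	}
	if err := os.Rename(newPath, unitPath); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", service}, {"restart", service}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := swapIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker.service up to date")
}
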
	I0728 15:43:52.941863   28750 start.go:307] post-start starting for "old-k8s-version-20220728153807-12923" (driver="docker")
	I0728 15:43:52.941870   28750 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:43:52.941934   28750 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:43:52.941995   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:53.006600   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:53.094280   28750 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:43:53.100080   28750 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:43:53.100098   28750 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:43:53.100105   28750 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:43:53.100109   28750 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:43:53.100119   28750 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:43:53.100242   28750 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:43:53.100374   28750 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:43:53.100517   28750 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:43:53.109632   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:43:53.126762   28750 start.go:310] post-start completed in 184.891915ms
	I0728 15:43:53.126836   28750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:43:53.126883   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:53.191616   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:53.276705   28750 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:43:53.281015   28750 fix.go:57] fixHost completed within 2.33954993s
	I0728 15:43:53.281029   28750 start.go:82] releasing machines lock for "old-k8s-version-20220728153807-12923", held for 2.339584988s
	I0728 15:43:53.281105   28750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220728153807-12923
	I0728 15:43:53.345999   28750 ssh_runner.go:195] Run: systemctl --version
	I0728 15:43:53.346002   28750 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:43:53.346069   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:53.346083   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:53.415502   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:53.416382   28750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58969 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/old-k8s-version-20220728153807-12923/id_rsa Username:docker}
	I0728 15:43:53.693282   28750 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:43:53.703210   28750 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:43:53.703267   28750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:43:53.715068   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:43:53.728140   28750 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:43:53.798778   28750 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:43:53.864441   28750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:43:53.929027   28750 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:43:54.130959   28750 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:43:54.167626   28750 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:43:54.246239   28750 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.17 ...
	I0728 15:43:54.246432   28750 cli_runner.go:164] Run: docker exec -t old-k8s-version-20220728153807-12923 dig +short host.docker.internal
	I0728 15:43:54.362961   28750 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:43:54.363076   28750 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:43:54.367718   28750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
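
network.go:96 discovers the host's address from inside the guest by digging the Docker Desktop alias host.docker.internal (192.168.65.2 on this agent), then pins it into the guest's /etc/hosts as host.minikube.internal with the grep/echo/cp rewrite above. The probe itself, sketched with os/exec:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// docker exec -t <container> dig +short host.docker.internal, as in the log.
	out, err := exec.Command("docker", "exec", "-t",
		"old-k8s-version-20220728153807-12923", "dig", "+short", "host.docker.internal").Output()
	if err != nil {
		log.Fatal(err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host ip for mount in container:", hostIP) // 192.168.65.2 in this run
}
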
	I0728 15:43:54.377807   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:54.476552   28750 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 15:43:54.476614   28750 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:43:54.506826   28750 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0728 15:43:54.506844   28750 docker.go:542] Images already preloaded, skipping extraction
	I0728 15:43:54.506923   28750 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:43:54.537701   28750 docker.go:611] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0728 15:43:54.537724   28750 cache_images.go:84] Images are preloaded, skipping loading
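
Both docker images listings above already contain every image the v1.16.0 preload ships, so cache_images.go:84 skips loading entirely. The check is essentially set containment; a sketch, with the expected list copied from the stdout block:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Expected set, copied verbatim from the preloaded-images stdout above.
	expected := []string{
		"k8s.gcr.io/kube-apiserver:v1.16.0",
		"k8s.gcr.io/kube-controller-manager:v1.16.0",
		"k8s.gcr.io/kube-scheduler:v1.16.0",
		"k8s.gcr.io/kube-proxy:v1.16.0",
		"k8s.gcr.io/etcd:3.3.15-0",
		"k8s.gcr.io/coredns:1.6.2",
		"k8s.gcr.io/pause:3.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range expected {
		if !have[img] {
			log.Fatalf("missing %s: preload extraction needed", img)
		}
	}
	fmt.Println("images already preloaded, skipping loading")
}
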
	I0728 15:43:54.537804   28750 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:43:54.609845   28750 cni.go:95] Creating CNI manager for ""
	I0728 15:43:54.609857   28750 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:43:54.609873   28750 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:43:54.609888   28750 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220728153807-12923 NodeName:old-k8s-version-20220728153807-12923 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:43:54.610015   28750 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-20220728153807-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220728153807-12923
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 15:43:54.610095   28750 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-20220728153807-12923 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
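
kubeadm.go:162 renders the three-document YAML above (InitConfiguration, ClusterConfiguration, and the Kubelet/KubeProxy configs) from the options struct logged just before it, then ships it to /var/tmp/minikube/kubeadm.yaml.new. A cut-down text/template sketch producing only the InitConfiguration stanza from the same inputs; the template text is abridged from the log, not minikube's actual template:

package main

import (
	"log"
	"os"
	"text/template"
)

// Abridged InitConfiguration stanza, shape copied from the kubeadm config dump.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	// Values taken from the kubeadm options line in the log.
	opts := struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}{"192.168.76.2", 8443, "/var/run/dockershim.sock", "old-k8s-version-20220728153807-12923"}
	t := template.Must(template.New("init").Parse(initCfg))
	if err := t.Execute(os.Stdout, opts); err != nil {
		log.Fatal(err)
	}
}
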
	I0728 15:43:54.610152   28750 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0728 15:43:54.618258   28750 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:43:54.618312   28750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:43:54.625914   28750 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0728 15:43:54.638312   28750 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:43:54.650390   28750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0728 15:43:54.662650   28750 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:43:54.666258   28750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:43:54.675591   28750 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923 for IP: 192.168.76.2
	I0728 15:43:54.675702   28750 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:43:54.675752   28750 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:43:54.675828   28750 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/client.key
	I0728 15:43:54.675888   28750 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.key.31bdca25
	I0728 15:43:54.675949   28750 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.key
	I0728 15:43:54.676161   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:43:54.676201   28750 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:43:54.676214   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:43:54.676249   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:43:54.676282   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:43:54.676311   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:43:54.676370   28750 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:43:54.676906   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:43:54.693525   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0728 15:43:54.710007   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:43:54.727109   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/old-k8s-version-20220728153807-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 15:43:54.743956   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:43:54.760573   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:43:54.777182   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:43:54.793800   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:43:54.810385   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:43:54.826768   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:43:54.843784   28750 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:43:54.860371   28750 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 15:43:54.873089   28750 ssh_runner.go:195] Run: openssl version
	I0728 15:43:54.878350   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:43:54.886133   28750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:43:54.889944   28750 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:43:54.889982   28750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:43:54.896504   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 15:43:54.903918   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:43:54.911623   28750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:43:54.915545   28750 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:43:54.915585   28750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:43:54.920977   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 15:43:54.928142   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:43:54.935893   28750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:43:54.939932   28750 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:43:54.939977   28750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:43:54.945076   28750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
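
The openssl x509 -hash -noout -in <cert> calls above print the subject-name hash (3ec20f2e, b5213941 and 51391683 here) that OpenSSL's lookup code expects as a <hash>.0 symlink under /etc/ssl/certs; the test -L || ln -fs guards then create those links idempotently. A sketch of one hash-and-link step, shelling out to openssl the same way the log does:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of certPath and symlinks
// /etc/ssl/certs/<hash>.0 back to it, as the ln -fs lines above do.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("certificate hash link created")
}
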
	I0728 15:43:54.952023   28750 kubeadm.go:395] StartCluster: {Name:old-k8s-version-20220728153807-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220728153807-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:43:54.952124   28750 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:43:54.982413   28750 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:43:54.990129   28750 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 15:43:54.990147   28750 kubeadm.go:626] restartCluster start
	I0728 15:43:54.990193   28750 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 15:43:54.997084   28750 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:54.997139   28750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-20220728153807-12923
	I0728 15:43:55.061683   28750 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220728153807-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:43:55.061868   28750 kubeconfig.go:127] "old-k8s-version-20220728153807-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 15:43:55.062205   28750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:43:55.063638   28750 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 15:43:55.071259   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.071320   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.079503   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:55.397737   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:43:57.399531   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:43:55.280076   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.280184   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.290411   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:55.481690   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.481806   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.492191   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:55.681640   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.681852   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.693077   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:55.881629   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:55.881805   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:55.893813   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.081620   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.081769   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.092929   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.281611   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.281821   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.292761   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.479869   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.480047   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.490772   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.679673   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.679846   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.690437   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:56.881685   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:56.881791   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:56.892358   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.079845   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.079982   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.090531   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.280055   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.280190   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.291095   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.480150   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.480244   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.492691   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.681615   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.681760   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.693150   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:57.881328   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:57.881469   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:57.892688   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:58.081706   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:58.081861   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:58.093332   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:58.093342   28750 api_server.go:165] Checking apiserver status ...
	I0728 15:43:58.093387   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:43:58.101659   28750 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:43:58.101671   28750 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0728 15:43:58.101676   28750 kubeadm.go:1092] stopping kube-system containers ...
	I0728 15:43:58.101734   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:43:58.130995   28750 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 15:43:58.141397   28750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:43:58.149507   28750 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5751 Jul 28 22:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5787 Jul 28 22:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Jul 28 22:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Jul 28 22:40 /etc/kubernetes/scheduler.conf
	
	I0728 15:43:58.149568   28750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 15:43:58.157415   28750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 15:43:58.165088   28750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 15:43:58.172300   28750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 15:43:58.179816   28750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:43:58.187386   28750 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 15:43:58.187397   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:58.238316   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:59.009658   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:59.230098   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:59.286178   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:43:59.342104   28750 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:43:59.342164   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:43:59.852670   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:43:59.897387   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:02.399723   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:00.352781   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:00.850650   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:01.352768   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:01.850866   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:02.351446   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:02.850606   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:03.351150   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:03.851365   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:04.352535   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:04.852723   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:04.896817   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:06.897354   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:05.350807   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:05.852624   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:06.352589   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:06.851125   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:07.350565   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:07.852643   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:08.350474   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:08.850445   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:09.352534   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:09.850933   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:08.899065   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:11.398260   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:13.398433   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:10.352606   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:10.852619   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:11.350440   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:11.852134   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:12.352473   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:12.851013   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:13.352270   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:13.850370   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:14.350630   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:14.851959   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:15.896934   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:17.897174   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:15.352566   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:15.851616   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:16.350762   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:16.850420   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:17.350313   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:17.852472   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:18.350337   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:18.851370   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:19.350807   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:19.851563   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:19.897590   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:21.897825   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:20.351203   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:20.851730   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:21.350468   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:21.851009   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:22.350371   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:22.850766   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:23.351160   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:23.851721   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:24.351235   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:24.850785   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:24.396108   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:26.398371   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:25.351192   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:25.850201   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:26.350640   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:26.850236   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:27.350168   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:27.850786   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:28.351502   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:28.851514   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:29.350143   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:29.851249   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:28.898205   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:31.394334   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:33.396151   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:30.350104   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:30.850231   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:31.352251   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:31.850849   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:32.350184   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:32.850157   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:33.351061   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:33.850197   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:34.351704   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:34.850967   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:35.396237   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:37.398008   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:35.350170   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:35.852079   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:36.350361   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:36.849970   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:37.352028   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:37.852028   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:38.352103   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:38.850752   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:39.349925   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:39.850497   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:39.897415   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:42.396857   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:40.350260   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:40.852112   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:41.350628   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:41.850335   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:42.350937   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:42.850588   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:43.350213   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:43.851905   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:44.350537   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:44.851886   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:44.896733   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:47.395498   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:45.351362   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:45.850422   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:46.350013   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:46.851847   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:47.350287   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:47.851880   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:48.349946   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:48.850339   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:49.350494   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:49.851141   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:49.397565   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:51.896910   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:50.350171   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:50.849782   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:51.350363   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:51.850156   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:52.351696   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:52.851835   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:53.349667   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:53.851882   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:54.351848   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:54.851044   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:54.397632   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:56.397780   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:58.398020   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:44:55.351691   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:55.851300   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:56.351196   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:56.851744   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:57.351804   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:57.850801   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:58.350639   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:58.851158   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:44:59.349783   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:44:59.382837   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.382851   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:44:59.382917   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:44:59.412464   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.412476   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:44:59.412541   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:44:59.442864   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.442878   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:44:59.442939   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:44:59.474280   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.474292   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:44:59.474350   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:44:59.504175   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.504187   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:44:59.504249   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:44:59.533670   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.533684   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:44:59.533737   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:44:59.565362   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.565374   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:44:59.565431   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:44:59.595139   28750 logs.go:274] 0 containers: []
	W0728 15:44:59.595151   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:44:59.595159   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:44:59.595166   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:44:59.609196   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:44:59.609210   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:00.897095   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:02.897516   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:01.663458   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054270661s)
	I0728 15:45:01.663570   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:01.663577   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:01.703232   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:01.703247   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:01.715560   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:01.715573   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:01.767426   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:04.268324   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:04.349908   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:04.380997   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.381016   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:04.381076   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:04.411821   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.411834   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:04.411892   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:04.441534   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.441546   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:04.441601   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:04.472385   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.472397   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:04.472486   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:04.501753   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.501766   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:04.501827   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:04.536867   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.536880   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:04.536936   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:04.567861   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.567875   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:04.567930   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:04.597628   28750 logs.go:274] 0 containers: []
	W0728 15:45:04.597640   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:04.597647   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:04.597657   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:05.395907   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:07.896645   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:06.654101   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.056467581s)
	I0728 15:45:06.654210   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:06.654217   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:06.694756   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:06.694770   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:06.707257   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:06.707270   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:06.761874   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:06.761884   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:06.761891   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:09.276908   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:09.351563   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:09.386142   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.386155   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:09.386219   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:09.418466   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.418478   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:09.418538   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:09.448308   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.448320   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:09.448380   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:09.479593   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.479607   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:09.479679   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:09.508030   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.508043   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:09.508099   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:09.537779   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.537792   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:09.537846   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:09.566993   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.567006   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:09.567065   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:09.596654   28750 logs.go:274] 0 containers: []
	W0728 15:45:09.596672   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:09.596682   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:09.596738   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:09.649892   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:09.649903   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:09.649919   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:09.664184   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:09.664200   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:09.898006   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:12.395262   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:11.716355   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052177269s)
	I0728 15:45:11.716505   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:11.716513   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:11.755880   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:11.755897   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:14.268633   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:14.349684   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:14.380092   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.380128   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:14.380189   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:14.410724   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.410736   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:14.410797   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:14.439371   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.439384   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:14.439439   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:14.469393   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.469406   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:14.469468   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:14.498223   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.498241   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:14.498310   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:14.527916   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.527928   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:14.527993   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:14.557360   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.557378   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:14.557437   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:14.586000   28750 logs.go:274] 0 containers: []
	W0728 15:45:14.586014   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:14.586021   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:14.586027   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:14.625146   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:14.625158   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:14.637616   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:14.637630   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:14.690046   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:14.690066   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:14.690072   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:14.703985   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:14.703997   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:14.894377   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:17.394590   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:16.758147   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054173236s)
	I0728 15:45:19.260560   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:19.351400   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:19.382794   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.382806   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:19.382867   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:19.412998   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.413010   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:19.413076   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:19.442557   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.442571   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:19.442639   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:19.475183   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.475196   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:19.475261   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:19.505391   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.505404   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:19.505469   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:19.536777   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.536793   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:19.536848   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:19.570024   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.570037   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:19.570094   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:19.599292   28750 logs.go:274] 0 containers: []
	W0728 15:45:19.599304   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:19.599311   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:19.599318   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:19.639705   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:19.639722   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:19.651159   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:19.651172   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:19.703460   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:19.703471   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:19.703478   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:19.717843   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:19.717856   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:19.394793   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:21.397697   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:21.769960   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05212591s)
	I0728 15:45:24.270526   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:24.351276   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:24.382811   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.382824   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:24.382886   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:24.414444   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.414457   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:24.414517   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:24.443832   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.443845   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:24.443908   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:24.474162   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.474175   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:24.474237   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:24.503347   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.503359   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:24.503421   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:24.531984   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.531996   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:24.532053   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:24.562043   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.562057   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:24.562112   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:24.591508   28750 logs.go:274] 0 containers: []
	W0728 15:45:24.591520   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:24.591528   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:24.591535   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:24.631583   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:24.631595   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:24.643477   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:24.643492   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:24.697351   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:24.697362   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:24.697368   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:24.711821   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:24.711834   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:23.897383   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:25.897565   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:28.395376   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:26.770905   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.059093267s)
	I0728 15:45:29.271547   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:29.349224   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:29.380066   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.380080   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:29.380151   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:29.409249   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.409261   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:29.409319   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:29.437151   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.437169   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:29.437240   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:29.467091   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.467103   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:29.467161   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:29.497532   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.497549   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:29.497615   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:29.526724   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.526737   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:29.526795   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:29.555433   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.555447   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:29.555505   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:29.584958   28750 logs.go:274] 0 containers: []
	W0728 15:45:29.584972   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:29.584981   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:29.584988   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:29.624109   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:29.624122   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:29.635456   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:29.635476   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:29.687908   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:29.687924   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:29.687931   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:29.702012   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:29.702024   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:30.396084   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:32.893828   28466 pod_ready.go:102] pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace has status "Ready":"False"
	I0728 15:45:31.757527   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055526095s)
	I0728 15:45:34.258583   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:34.349159   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:34.379696   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.379712   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:34.379777   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:34.409678   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.409691   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:34.409750   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:34.448652   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.448666   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:34.448783   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:34.481247   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.481260   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:34.481331   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:34.515888   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.515900   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:34.515957   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:34.546279   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.546293   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:34.546361   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:34.578942   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.578959   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:34.579027   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:34.610475   28750 logs.go:274] 0 containers: []
	W0728 15:45:34.610486   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:34.610493   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:34.610500   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:34.657901   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:34.657920   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:34.671775   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:34.671798   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:34.725845   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:34.725862   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:34.725869   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:34.743490   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:34.743511   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:33.889657   28466 pod_ready.go:81] duration metric: took 4m0.005353913s waiting for pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace to be "Ready" ...
	E0728 15:45:33.889681   28466 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-2gxt5" in "kube-system" namespace to be "Ready" (will not retry!)
	I0728 15:45:33.889764   28466 pod_ready.go:38] duration metric: took 4m15.55855467s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:45:33.889804   28466 kubeadm.go:630] restartCluster took 4m24.926328202s
	W0728 15:45:33.889929   28466 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0728 15:45:33.889957   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0728 15:45:36.339867   28466 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.449910034s)
	I0728 15:45:36.339927   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:45:36.349324   28466 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:45:36.356429   28466 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:45:36.356476   28466 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:45:36.363628   28466 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 15:45:36.363652   28466 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:45:36.642620   28466 out.go:204]   - Generating certificates and keys ...
	I0728 15:45:37.767400   28466 out.go:204]   - Booting up control plane ...
	I0728 15:45:36.796303   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05281482s)
	I0728 15:45:39.297144   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:39.349206   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:39.384998   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.385012   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:39.385074   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:39.415143   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.415155   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:39.415212   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:39.455721   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.455742   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:39.455813   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:39.486528   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.486545   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:39.486610   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:39.514977   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.514990   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:39.515048   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:39.550354   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.550367   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:39.550435   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:39.583427   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.583445   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:39.583507   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:39.613948   28750 logs.go:274] 0 containers: []
	W0728 15:45:39.613963   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:39.613970   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:39.613976   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:41.665141   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051188026s)
	I0728 15:45:41.665254   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:41.665262   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:41.703690   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:41.703705   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:41.715446   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:41.715461   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:41.769895   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
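The diagnostic loop above keeps failing at the same point: nothing is listening on localhost:8443 inside the node, so every component lookup returns "0 containers" and `kubectl describe nodes` is refused. A minimal manual reproduction of that probe, assuming shell access to the node for the affected profile (`<profile>` is a placeholder; this run's profile name is not shown in this excerpt):

    minikube ssh -p <profile>                              # open a shell inside the node
    curl -k https://localhost:8443/healthz                 # expect "connection refused" while no apiserver is up
    sudo docker ps -a --filter=name=k8s_kube-apiserver     # empty, matching the "0 containers" lines above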
	I0728 15:45:41.769906   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:41.769913   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:44.283371   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:44.350279   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:44.381088   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.381106   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:44.381177   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:44.410783   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.410796   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:44.410859   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:44.439499   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.439511   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:44.439565   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:44.468617   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.468631   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:44.468687   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:44.502836   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.502850   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:44.502906   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:44.531631   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.531645   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:44.531710   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:44.562770   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.562782   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:44.562843   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:44.590589   28750 logs.go:274] 0 containers: []
	W0728 15:45:44.590605   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:44.590612   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:44.590619   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:44.630687   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:44.630701   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:44.643944   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:44.643958   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:44.697537   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:44.697552   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:44.697560   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:44.711695   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:44.711708   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:44.826788   28466 out.go:204]   - Configuring RBAC rules ...
	I0728 15:45:45.232252   28466 cni.go:95] Creating CNI manager for ""
	I0728 15:45:45.232266   28466 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:45:45.232286   28466 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 15:45:45.232379   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:45.232384   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551 minikube.k8s.io/name=no-preload-20220728153949-12923 minikube.k8s.io/updated_at=2022_07_28T15_45_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:45.244124   28466 ops.go:34] apiserver oom_adj: -16
	I0728 15:45:45.358591   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:45.913296   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:46.413506   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:46.912821   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:47.413547   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:47.913026   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:48.413424   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:46.766195   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054508784s)
	I0728 15:45:49.266834   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:49.350965   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:49.381946   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.381958   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:49.382017   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:49.411642   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.411655   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:49.411712   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:49.443920   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.443931   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:49.443989   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:49.489604   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.489617   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:49.489677   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:49.521878   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.521891   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:49.521946   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:49.550505   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.550518   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:49.550579   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:49.578158   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.578171   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:49.578228   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:49.606569   28750 logs.go:274] 0 containers: []
	W0728 15:45:49.606582   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:49.606589   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:49.606596   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:49.647420   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:49.647434   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:49.659418   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:49.659430   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:49.712728   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:49.712739   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:49.712748   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:49.726477   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:49.726490   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:48.914841   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:49.412700   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:49.914899   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:50.413335   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:50.912856   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:51.412923   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:51.912900   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:52.413219   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:52.912789   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:53.412866   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:51.782399   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055932042s)
	I0728 15:45:54.282734   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:54.350836   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:54.380911   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.380923   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:54.380988   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:54.409653   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.409665   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:54.409728   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:54.437934   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.437948   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:54.438009   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:54.469669   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.469682   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:54.469762   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:54.497866   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.497878   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:54.497939   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:54.527154   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.527166   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:54.527225   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:54.555859   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.555872   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:54.555929   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:54.585491   28750 logs.go:274] 0 containers: []
	W0728 15:45:54.585508   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:54.585515   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:54.585527   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:54.638036   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:54.638054   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:54.638060   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:54.651690   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:54.651703   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:53.913846   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:54.412622   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:54.914082   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:55.412652   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:55.914738   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:56.413346   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:56.912748   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:57.413739   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:57.914229   28466 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:45:57.989686   28466 kubeadm.go:1045] duration metric: took 12.757584516s to wait for elevateKubeSystemPrivileges.
	I0728 15:45:57.989703   28466 kubeadm.go:397] StartCluster complete in 4m49.064424466s
	I0728 15:45:57.989718   28466 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:45:57.989792   28466 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:45:57.990324   28466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:45:58.526817   28466 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20220728153949-12923" rescaled to 1
	I0728 15:45:58.526854   28466 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 15:45:58.526861   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 15:45:58.526878   28466 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0728 15:45:58.527016   28466 config.go:178] Loaded profile config "no-preload-20220728153949-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:45:58.548502   28466 out.go:177] * Verifying Kubernetes components...
	I0728 15:45:58.548569   28466 addons.go:65] Setting default-storageclass=true in profile "no-preload-20220728153949-12923"
	I0728 15:45:58.589985   28466 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20220728153949-12923"
	I0728 15:45:58.548566   28466 addons.go:65] Setting storage-provisioner=true in profile "no-preload-20220728153949-12923"
	I0728 15:45:58.590013   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:45:58.590027   28466 addons.go:153] Setting addon storage-provisioner=true in "no-preload-20220728153949-12923"
	W0728 15:45:58.590040   28466 addons.go:162] addon storage-provisioner should already be in state true
	I0728 15:45:58.548579   28466 addons.go:65] Setting dashboard=true in profile "no-preload-20220728153949-12923"
	I0728 15:45:58.590071   28466 addons.go:153] Setting addon dashboard=true in "no-preload-20220728153949-12923"
	W0728 15:45:58.590082   28466 addons.go:162] addon dashboard should already be in state true
	I0728 15:45:58.590082   28466 host.go:66] Checking if "no-preload-20220728153949-12923" exists ...
	I0728 15:45:58.548577   28466 addons.go:65] Setting metrics-server=true in profile "no-preload-20220728153949-12923"
	I0728 15:45:58.590115   28466 addons.go:153] Setting addon metrics-server=true in "no-preload-20220728153949-12923"
	I0728 15:45:58.590118   28466 host.go:66] Checking if "no-preload-20220728153949-12923" exists ...
	I0728 15:45:58.580026   28466 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W0728 15:45:58.590126   28466 addons.go:162] addon metrics-server should already be in state true
	I0728 15:45:58.590185   28466 host.go:66] Checking if "no-preload-20220728153949-12923" exists ...
	I0728 15:45:58.590401   28466 cli_runner.go:164] Run: docker container inspect no-preload-20220728153949-12923 --format={{.State.Status}}
	I0728 15:45:58.590546   28466 cli_runner.go:164] Run: docker container inspect no-preload-20220728153949-12923 --format={{.State.Status}}
	I0728 15:45:58.591079   28466 cli_runner.go:164] Run: docker container inspect no-preload-20220728153949-12923 --format={{.State.Status}}
	I0728 15:45:58.594850   28466 cli_runner.go:164] Run: docker container inspect no-preload-20220728153949-12923 --format={{.State.Status}}
	I0728 15:45:58.695649   28466 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0728 15:45:58.754046   28466 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 15:45:58.790960   28466 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0728 15:45:58.812176   28466 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0728 15:45:58.818778   28466 addons.go:153] Setting addon default-storageclass=true in "no-preload-20220728153949-12923"
	I0728 15:45:56.704258   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052577476s)
	I0728 15:45:56.704368   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:56.704375   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:56.743901   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:56.743916   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:59.255613   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:45:59.348618   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:45:59.378663   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.378676   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:45:59.378733   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:45:59.407038   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.407050   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:45:59.407106   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:45:59.450158   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.450182   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:45:59.450263   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:45:59.481564   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.481576   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:45:59.481635   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:45:59.509158   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.509171   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:45:59.509229   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:45:59.547552   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.547570   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:45:59.547643   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:45:59.578542   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.578554   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:45:59.578613   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:45:59.606863   28750 logs.go:274] 0 containers: []
	W0728 15:45:59.606876   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:45:59.606883   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:45:59.606892   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:45:59.649194   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:45:59.649222   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:45:59.663803   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:45:59.663819   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:45:59.714772   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:45:59.714789   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:45:59.714798   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:45:59.734190   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:45:59.734229   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:45:58.849081   28466 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0728 15:45:58.907117   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	W0728 15:45:58.869947   28466 addons.go:162] addon default-storageclass should already be in state true
	I0728 15:45:58.907160   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0728 15:45:58.907170   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0728 15:45:58.870036   28466 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:45:58.907185   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 15:45:58.907197   28466 host.go:66] Checking if "no-preload-20220728153949-12923" exists ...
	I0728 15:45:58.907197   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220728153949-12923
	I0728 15:45:58.907247   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220728153949-12923
	I0728 15:45:58.907251   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220728153949-12923
	I0728 15:45:58.912038   28466 cli_runner.go:164] Run: docker container inspect no-preload-20220728153949-12923 --format={{.State.Status}}
	I0728 15:45:58.994544   28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58932 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/no-preload-20220728153949-12923/id_rsa Username:docker}
	I0728 15:45:58.997290   28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58932 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/no-preload-20220728153949-12923/id_rsa Username:docker}
	I0728 15:45:58.997437   28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58932 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/no-preload-20220728153949-12923/id_rsa Username:docker}
	I0728 15:45:58.999592   28466 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 15:45:58.999603   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 15:45:58.999748   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220728153949-12923
	I0728 15:45:59.080553   28466 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58932 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/no-preload-20220728153949-12923/id_rsa Username:docker}
	I0728 15:45:59.138372   28466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:45:59.143860   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0728 15:45:59.143874   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0728 15:45:59.230878   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0728 15:45:59.230896   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0728 15:45:59.247435   28466 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0728 15:45:59.247449   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0728 15:45:59.250912   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0728 15:45:59.250929   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0728 15:45:59.329223   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0728 15:45:59.329241   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0728 15:45:59.331623   28466 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0728 15:45:59.331638   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0728 15:45:59.347749   28466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 15:45:59.429934   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0728 15:45:59.429948   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0728 15:45:59.431847   28466 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 15:45:59.431861   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0728 15:45:59.450115   28466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 15:45:59.459293   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0728 15:45:59.459312   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0728 15:45:59.545816   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0728 15:45:59.545840   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0728 15:45:59.655127   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0728 15:45:59.655145   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0728 15:45:59.740993   28466 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 15:45:59.741009   28466 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0728 15:45:59.822122   28466 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 15:45:59.832268   28466 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.242127417s)
	I0728 15:45:59.832287   28466 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.242274949s)
	I0728 15:45:59.832293   28466 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
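The CoreDNS rewrite completed above pipes the configmap through sed to add a hosts block and replaces it in place. One way to confirm the injected record afterwards (a sketch, run from the host against this cluster's kubeconfig):

    kubectl -n kube-system get configmap coredns -o yaml | grep -B1 -A3 'hosts {'
    # should show the injected "192.168.65.2 host.minikube.internal" entry with its fallthrough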
	I0728 15:45:59.832396   28466 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-20220728153949-12923
	I0728 15:45:59.900849   28466 node_ready.go:35] waiting up to 6m0s for node "no-preload-20220728153949-12923" to be "Ready" ...
	I0728 15:45:59.917729   28466 node_ready.go:49] node "no-preload-20220728153949-12923" has status "Ready":"True"
	I0728 15:45:59.917739   28466 node_ready.go:38] duration metric: took 16.86682ms waiting for node "no-preload-20220728153949-12923" to be "Ready" ...
	I0728 15:45:59.917744   28466 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:45:59.927582   28466 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9vjb2" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:00.224380   28466 addons.go:383] Verifying addon metrics-server=true in "no-preload-20220728153949-12923"
	I0728 15:46:00.442341   28466 pod_ready.go:92] pod "coredns-6d4b75cb6d-9vjb2" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:00.442370   28466 pod_ready.go:81] duration metric: took 514.770942ms waiting for pod "coredns-6d4b75cb6d-9vjb2" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:00.442381   28466 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-kv2dp" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:00.470970   28466 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0728 15:46:00.507076   28466 addons.go:414] enableAddons completed in 1.980228842s
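With enableAddons complete, the four addons reported above can be cross-checked from the host; a sketch using the profile name from this run:

    minikube addons list -p no-preload-20220728153949-12923
    # storage-provisioner, default-storageclass, metrics-server and dashboard should report enabled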
	I0728 15:46:02.457821   28466 pod_ready.go:102] pod "coredns-6d4b75cb6d-kv2dp" in "kube-system" namespace has status "Ready":"False"
	I0728 15:46:01.794612   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.060404071s)
	I0728 15:46:04.294986   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:04.348576   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:04.378484   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.378498   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:04.378561   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:04.406624   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.406636   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:04.406692   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:04.445898   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.445930   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:04.445992   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:04.489972   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.489989   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:04.490075   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:04.530482   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.530498   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:04.530561   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:04.563512   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.563527   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:04.563586   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:04.597809   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.597825   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:04.597888   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:04.635527   28750 logs.go:274] 0 containers: []
	W0728 15:46:04.635544   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:04.635553   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:04.635560   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:04.648400   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:04.648417   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:04.714199   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:04.714221   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:04.714234   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:04.731052   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:04.731068   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:04.459041   28466 pod_ready.go:92] pod "coredns-6d4b75cb6d-kv2dp" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:04.459057   28466 pod_ready.go:81] duration metric: took 4.01673536s waiting for pod "coredns-6d4b75cb6d-kv2dp" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.459065   28466 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.464287   28466 pod_ready.go:92] pod "etcd-no-preload-20220728153949-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:04.464297   28466 pod_ready.go:81] duration metric: took 5.228547ms waiting for pod "etcd-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.464305   28466 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.469967   28466 pod_ready.go:92] pod "kube-apiserver-no-preload-20220728153949-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:04.469978   28466 pod_ready.go:81] duration metric: took 5.669404ms waiting for pod "kube-apiserver-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.469985   28466 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.475157   28466 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220728153949-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:04.475166   28466 pod_ready.go:81] duration metric: took 5.176625ms waiting for pod "kube-controller-manager-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.475176   28466 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wnfz5" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.480189   28466 pod_ready.go:92] pod "kube-proxy-wnfz5" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:04.480200   28466 pod_ready.go:81] duration metric: took 5.019746ms waiting for pod "kube-proxy-wnfz5" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.480209   28466 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.854654   28466 pod_ready.go:92] pod "kube-scheduler-no-preload-20220728153949-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:46:04.854669   28466 pod_ready.go:81] duration metric: took 374.456191ms waiting for pod "kube-scheduler-no-preload-20220728153949-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:46:04.854674   28466 pod_ready.go:38] duration metric: took 4.937005546s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:46:04.854690   28466 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:46:04.854739   28466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:04.914822   28466 api_server.go:71] duration metric: took 6.388056042s to wait for apiserver process to appear ...
	I0728 15:46:04.914839   28466 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:46:04.914846   28466 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:58936/healthz ...
	I0728 15:46:04.925933   28466 api_server.go:266] https://127.0.0.1:58936/healthz returned 200:
	ok
	I0728 15:46:04.927291   28466 api_server.go:140] control plane version: v1.24.3
	I0728 15:46:04.927300   28466 api_server.go:130] duration metric: took 12.457178ms to wait for apiserver health ...
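The healthz wait above polls the apiserver through the host-mapped port recorded in this run (58936). The equivalent manual probe is a plain HTTPS request, skipping certificate verification since the endpoint presents the cluster's self-signed certificate:

    curl -k https://127.0.0.1:58936/healthz
    # prints "ok" with HTTP 200 once the control plane is healthy, as logged above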
	I0728 15:46:04.927305   28466 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:46:05.059616   28466 system_pods.go:59] 9 kube-system pods found
	I0728 15:46:05.059631   28466 system_pods.go:61] "coredns-6d4b75cb6d-9vjb2" [e7ff5e7b-3b27-4312-a39d-a807cc162175] Running
	I0728 15:46:05.059635   28466 system_pods.go:61] "coredns-6d4b75cb6d-kv2dp" [f57a996a-e1a8-4e75-a619-5671e5398a85] Running
	I0728 15:46:05.059639   28466 system_pods.go:61] "etcd-no-preload-20220728153949-12923" [1e755c7d-94af-466b-b973-367fe022b2ec] Running
	I0728 15:46:05.059644   28466 system_pods.go:61] "kube-apiserver-no-preload-20220728153949-12923" [f45ada08-8ab3-4173-af85-c6c94912703a] Running
	I0728 15:46:05.059652   28466 system_pods.go:61] "kube-controller-manager-no-preload-20220728153949-12923" [14416218-b978-469c-96bb-e5ef5165ea3e] Running
	I0728 15:46:05.059658   28466 system_pods.go:61] "kube-proxy-wnfz5" [4bd8afa6-e125-44b3-b396-bcec5dc95ab3] Running
	I0728 15:46:05.059664   28466 system_pods.go:61] "kube-scheduler-no-preload-20220728153949-12923" [36649128-eaef-4cb3-93e1-a52797fdea9c] Running
	I0728 15:46:05.059670   28466 system_pods.go:61] "metrics-server-5c6f97fb75-gkqvh" [44470584-40a9-4bb2-8cff-f06ef3e04c5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:46:05.059676   28466 system_pods.go:61] "storage-provisioner" [73a9ba2c-55f8-4a8f-97d0-40aee2a41f7c] Running
	I0728 15:46:05.059680   28466 system_pods.go:74] duration metric: took 132.374042ms to wait for pod list to return data ...
	I0728 15:46:05.059685   28466 default_sa.go:34] waiting for default service account to be created ...
	I0728 15:46:05.254711   28466 default_sa.go:45] found service account: "default"
	I0728 15:46:05.254723   28466 default_sa.go:55] duration metric: took 195.036763ms for default service account to be created ...
	I0728 15:46:05.254728   28466 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 15:46:05.457897   28466 system_pods.go:86] 9 kube-system pods found
	I0728 15:46:05.457911   28466 system_pods.go:89] "coredns-6d4b75cb6d-9vjb2" [e7ff5e7b-3b27-4312-a39d-a807cc162175] Running
	I0728 15:46:05.457917   28466 system_pods.go:89] "coredns-6d4b75cb6d-kv2dp" [f57a996a-e1a8-4e75-a619-5671e5398a85] Running
	I0728 15:46:05.457923   28466 system_pods.go:89] "etcd-no-preload-20220728153949-12923" [1e755c7d-94af-466b-b973-367fe022b2ec] Running
	I0728 15:46:05.457926   28466 system_pods.go:89] "kube-apiserver-no-preload-20220728153949-12923" [f45ada08-8ab3-4173-af85-c6c94912703a] Running
	I0728 15:46:05.457930   28466 system_pods.go:89] "kube-controller-manager-no-preload-20220728153949-12923" [14416218-b978-469c-96bb-e5ef5165ea3e] Running
	I0728 15:46:05.457934   28466 system_pods.go:89] "kube-proxy-wnfz5" [4bd8afa6-e125-44b3-b396-bcec5dc95ab3] Running
	I0728 15:46:05.457940   28466 system_pods.go:89] "kube-scheduler-no-preload-20220728153949-12923" [36649128-eaef-4cb3-93e1-a52797fdea9c] Running
	I0728 15:46:05.457947   28466 system_pods.go:89] "metrics-server-5c6f97fb75-gkqvh" [44470584-40a9-4bb2-8cff-f06ef3e04c5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:46:05.457952   28466 system_pods.go:89] "storage-provisioner" [73a9ba2c-55f8-4a8f-97d0-40aee2a41f7c] Running
	I0728 15:46:05.457957   28466 system_pods.go:126] duration metric: took 203.229084ms to wait for k8s-apps to be running ...
	I0728 15:46:05.457961   28466 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 15:46:05.458012   28466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:46:05.468105   28466 system_svc.go:56] duration metric: took 10.139358ms WaitForService to wait for kubelet.
	I0728 15:46:05.468120   28466 kubeadm.go:572] duration metric: took 6.941365778s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0728 15:46:05.468140   28466 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:46:05.655963   28466 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:46:05.655975   28466 node_conditions.go:123] node cpu capacity is 6
	I0728 15:46:05.655988   28466 node_conditions.go:105] duration metric: took 187.840999ms to run NodePressure ...
	I0728 15:46:05.655998   28466 start.go:216] waiting for startup goroutines ...
	I0728 15:46:05.687111   28466 start.go:506] kubectl: 1.24.2, cluster: 1.24.3 (minor skew: 0)
	I0728 15:46:05.711082   28466 out.go:177] * Done! kubectl is now configured to use "no-preload-20220728153949-12923" cluster and "default" namespace by default
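Every readiness check in the successful run above (pod list, default service account, k8s-apps, the kubelet service, NodePressure) reduces to the same poll-until-deadline pattern, summarized by the wait map logged at kubeadm.go:572. The Go sketch below illustrates that pattern only; waitFor, its parameters, and the plain `systemctl is-active --quiet kubelet` probe are assumed names for illustration, not minikube's actual helpers.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor polls check every interval until it succeeds or timeout elapses.
func waitFor(label string, interval, timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: %w", label, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Mirrors the kubelet probe logged above: exit status 0 from
	// `systemctl is-active --quiet kubelet` means the unit is active.
	err := waitFor("kubelet service", 500*time.Millisecond, 30*time.Second, func() error {
		return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	})
	fmt.Println("wait result:", err)
}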
	I0728 15:46:06.793258   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.062211512s)
	I0728 15:46:06.793371   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:06.793474   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:09.339612   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:09.848472   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:09.877399   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.877411   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:09.877472   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:09.906396   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.906414   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:09.906480   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:09.936854   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.936869   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:09.936928   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:09.966233   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.966249   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:09.966315   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:09.996992   28750 logs.go:274] 0 containers: []
	W0728 15:46:09.997005   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:09.997065   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:10.033579   28750 logs.go:274] 0 containers: []
	W0728 15:46:10.033593   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:10.033650   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:10.069419   28750 logs.go:274] 0 containers: []
	W0728 15:46:10.069433   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:10.069498   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:10.099084   28750 logs.go:274] 0 containers: []
	W0728 15:46:10.099097   28750 logs.go:276] No container was found matching "kube-controller-manager"
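Each diagnostic pass above probes for one container per control-plane component. Containers created for pods are named with a k8s_ prefix (k8s_<container>_<pod>_...), so a name filter on `docker ps -a` matches them even after they exit, and an empty result means the component's container was never created or has been removed. A standalone sketch of the same scan (illustrative only, not minikube source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kubernetes-dashboard", "storage-provisioner",
		"kube-controller-manager",
	}
	for _, name := range components {
		// -a includes exited containers; the k8s_ prefix is how the
		// Docker-based CRI names pod containers.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}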
	I0728 15:46:10.099104   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:10.099112   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:10.112767   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:10.112787   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:10.173268   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
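The describe-nodes failure is consistent with the empty scans above: the stored kubeconfig points kubectl at the apiserver's secure port on localhost:8443, and with no kube-apiserver container running the TCP connect is refused before TLS or authentication ever happen. A quick probe that separates a closed port from higher-level failures (a hypothetical helper, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" from kubectl means nothing accepted the TCP
	// connection on the apiserver's secure port; probe it directly.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("port 8443 is open; the failure is higher in the stack (TLS/auth)")
}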
	I0728 15:46:10.173288   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:10.173301   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:10.188909   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:10.188923   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:12.242044   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053144227s)
	I0728 15:46:12.242152   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:12.242158   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:14.784324   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:14.850444   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:14.882027   28750 logs.go:274] 0 containers: []
	W0728 15:46:14.882040   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:14.882097   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:14.912290   28750 logs.go:274] 0 containers: []
	W0728 15:46:14.912303   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:14.912361   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:14.946389   28750 logs.go:274] 0 containers: []
	W0728 15:46:14.946410   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:14.946488   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:14.978870   28750 logs.go:274] 0 containers: []
	W0728 15:46:14.978883   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:14.978943   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:15.008965   28750 logs.go:274] 0 containers: []
	W0728 15:46:15.008978   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:15.009036   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:15.037778   28750 logs.go:274] 0 containers: []
	W0728 15:46:15.037792   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:15.037852   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:15.066142   28750 logs.go:274] 0 containers: []
	W0728 15:46:15.066154   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:15.066212   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:15.097151   28750 logs.go:274] 0 containers: []
	W0728 15:46:15.097164   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:15.097172   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:15.097179   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:15.139648   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:15.168589   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:15.186371   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:15.186386   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:15.242465   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:15.242479   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:15.242491   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:15.256100   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:15.256112   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:17.308800   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052710598s)
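The container-status command above is a two-level shell fallback: the backtick substitution `which crictl || echo crictl` expands to the crictl path when it is installed, and to the bare word crictl otherwise, whose failed invocation then triggers the outer `|| sudo docker ps -a` branch. The same preference order in Go (a hypothetical helper, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when present and falls back to docker,
// mirroring the shell chain gathered in the log above.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		return exec.Command(path, "ps", "-a").Output()
	}
	return exec.Command("docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}

exec.LookPath resolves the binary without spawning a shell; the one-liner form in the log exists because the whole check must travel to the node as a single ssh command string.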
	I0728 15:46:19.811135   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:19.848655   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:19.879363   28750 logs.go:274] 0 containers: []
	W0728 15:46:19.879375   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:19.879433   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:19.909343   28750 logs.go:274] 0 containers: []
	W0728 15:46:19.909355   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:19.909414   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:19.938912   28750 logs.go:274] 0 containers: []
	W0728 15:46:19.938925   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:19.938985   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:19.975343   28750 logs.go:274] 0 containers: []
	W0728 15:46:19.975357   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:19.975425   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:20.008264   28750 logs.go:274] 0 containers: []
	W0728 15:46:20.008275   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:20.008331   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:20.038658   28750 logs.go:274] 0 containers: []
	W0728 15:46:20.038670   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:20.038723   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:20.070456   28750 logs.go:274] 0 containers: []
	W0728 15:46:20.070470   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:20.070534   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:20.102016   28750 logs.go:274] 0 containers: []
	W0728 15:46:20.102029   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:20.102037   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:20.102046   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:20.114591   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:20.114610   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:20.174188   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:20.174201   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:20.174208   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:20.189645   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:20.189663   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:22.250872   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.061232319s)
	I0728 15:46:22.250983   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:22.250991   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:24.791788   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:24.849171   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:24.880694   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.880706   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:24.880760   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:24.908813   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.908826   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:24.908880   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:24.937412   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.937425   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:24.937484   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:24.966808   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.966819   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:24.966880   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:24.996939   28750 logs.go:274] 0 containers: []
	W0728 15:46:24.996952   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:24.997013   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:25.025856   28750 logs.go:274] 0 containers: []
	W0728 15:46:25.025868   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:25.025927   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:25.054899   28750 logs.go:274] 0 containers: []
	W0728 15:46:25.054911   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:25.054970   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:25.083700   28750 logs.go:274] 0 containers: []
	W0728 15:46:25.083712   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:25.083720   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:25.083729   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:25.097410   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:25.097423   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:27.151701   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054299606s)
	I0728 15:46:27.151808   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:27.151815   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:27.192088   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:27.192102   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:27.203829   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:27.203842   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:27.257399   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:29.758384   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:29.848749   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:29.880250   28750 logs.go:274] 0 containers: []
	W0728 15:46:29.880262   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:29.880318   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:29.910202   28750 logs.go:274] 0 containers: []
	W0728 15:46:29.910215   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:29.910271   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:29.940618   28750 logs.go:274] 0 containers: []
	W0728 15:46:29.940632   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:29.940699   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:29.971567   28750 logs.go:274] 0 containers: []
	W0728 15:46:29.971583   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:29.971645   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:30.004734   28750 logs.go:274] 0 containers: []
	W0728 15:46:30.004750   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:30.004814   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:30.036150   28750 logs.go:274] 0 containers: []
	W0728 15:46:30.036164   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:30.036234   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:30.066088   28750 logs.go:274] 0 containers: []
	W0728 15:46:30.066101   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:30.066156   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:30.095216   28750 logs.go:274] 0 containers: []
	W0728 15:46:30.095228   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:30.095235   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:30.095242   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:30.148425   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:30.152196   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:30.152207   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:30.165693   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:30.165704   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:32.216553   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050871151s)
	I0728 15:46:32.216665   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:32.216673   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:32.259143   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:32.259161   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:34.771261   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:34.850228   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:34.881646   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.881658   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:34.881714   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:34.911053   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.911065   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:34.911120   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:34.940187   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.940199   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:34.940257   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:34.968953   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.968965   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:34.969022   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:34.999346   28750 logs.go:274] 0 containers: []
	W0728 15:46:34.999359   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:34.999415   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:35.028920   28750 logs.go:274] 0 containers: []
	W0728 15:46:35.028933   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:35.028991   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:35.058519   28750 logs.go:274] 0 containers: []
	W0728 15:46:35.058531   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:35.058589   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:35.087805   28750 logs.go:274] 0 containers: []
	W0728 15:46:35.087817   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:35.087824   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:35.087831   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:35.127597   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:35.127610   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:35.140602   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:35.151800   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:35.210991   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:35.211004   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:35.211011   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:35.227071   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:35.227085   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:37.280866   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053801834s)
	I0728 15:46:39.781106   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:39.848000   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:39.879394   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.879406   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:39.879461   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:39.909065   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.909077   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:39.909133   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:39.938272   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.938283   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:39.938346   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:39.967027   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.967044   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:39.967102   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:39.996593   28750 logs.go:274] 0 containers: []
	W0728 15:46:39.996605   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:39.996661   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:40.025955   28750 logs.go:274] 0 containers: []
	W0728 15:46:40.025967   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:40.026023   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:40.054606   28750 logs.go:274] 0 containers: []
	W0728 15:46:40.054618   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:40.054677   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:40.083931   28750 logs.go:274] 0 containers: []
	W0728 15:46:40.083944   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:40.083951   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:40.083958   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:40.122714   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:40.122727   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:40.133764   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:40.151970   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:40.205103   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:40.205113   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:40.205125   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:40.218748   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:40.218759   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:42.277646   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058908911s)
	I0728 15:46:44.779464   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:44.849951   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:44.881165   28750 logs.go:274] 0 containers: []
	W0728 15:46:44.881178   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:44.881238   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:44.909841   28750 logs.go:274] 0 containers: []
	W0728 15:46:44.909855   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:44.909917   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:44.941101   28750 logs.go:274] 0 containers: []
	W0728 15:46:44.941114   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:44.941179   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:44.972307   28750 logs.go:274] 0 containers: []
	W0728 15:46:44.972320   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:44.972376   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:45.006437   28750 logs.go:274] 0 containers: []
	W0728 15:46:45.006450   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:45.006508   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:45.036116   28750 logs.go:274] 0 containers: []
	W0728 15:46:45.036128   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:45.036185   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:45.064214   28750 logs.go:274] 0 containers: []
	W0728 15:46:45.064226   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:45.064286   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:45.093400   28750 logs.go:274] 0 containers: []
	W0728 15:46:45.093414   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:45.093420   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:45.093427   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:45.107382   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:45.107395   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:47.162864   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055491977s)
	I0728 15:46:47.162967   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:47.162974   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:47.205000   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:47.205023   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:47.216942   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:47.216956   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:47.269215   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:49.769983   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:49.849883   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:49.880517   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.880530   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:49.880587   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:49.908888   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.908904   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:49.908964   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:49.937900   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.937914   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:49.937975   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:49.966223   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.966236   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:49.966292   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:49.995275   28750 logs.go:274] 0 containers: []
	W0728 15:46:49.995288   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:49.995344   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:50.025324   28750 logs.go:274] 0 containers: []
	W0728 15:46:50.025338   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:50.025396   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:50.054609   28750 logs.go:274] 0 containers: []
	W0728 15:46:50.054621   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:50.054679   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:50.082727   28750 logs.go:274] 0 containers: []
	W0728 15:46:50.082739   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:50.082746   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:50.082753   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:50.134737   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:50.151600   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:50.151609   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:50.166276   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:50.166289   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:52.220560   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054292421s)
	I0728 15:46:52.220667   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:52.220673   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:52.259245   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:52.259258   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:54.773839   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:54.847624   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:54.877686   28750 logs.go:274] 0 containers: []
	W0728 15:46:54.877698   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:54.877752   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:54.908194   28750 logs.go:274] 0 containers: []
	W0728 15:46:54.908206   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:54.908265   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:54.942839   28750 logs.go:274] 0 containers: []
	W0728 15:46:54.942851   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:54.942904   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:54.977060   28750 logs.go:274] 0 containers: []
	W0728 15:46:54.977072   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:54.977129   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:46:55.008268   28750 logs.go:274] 0 containers: []
	W0728 15:46:55.008285   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:46:55.008356   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:46:55.039796   28750 logs.go:274] 0 containers: []
	W0728 15:46:55.039809   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:46:55.039870   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:46:55.070921   28750 logs.go:274] 0 containers: []
	W0728 15:46:55.070933   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:46:55.070992   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:46:55.102136   28750 logs.go:274] 0 containers: []
	W0728 15:46:55.102153   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:46:55.102162   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:46:55.102171   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:46:55.144328   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:46:55.153238   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:46:55.166460   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:46:55.166474   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:46:55.223089   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:46:55.223101   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:46:55.223110   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0728 15:46:55.237281   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:46:55.237300   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:46:57.291911   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054631983s)
	I0728 15:46:59.792635   28750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:46:59.849540   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:46:59.895169   28750 logs.go:274] 0 containers: []
	W0728 15:46:59.895184   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:46:59.895245   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:46:59.928773   28750 logs.go:274] 0 containers: []
	W0728 15:46:59.928796   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:46:59.928862   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:46:59.958330   28750 logs.go:274] 0 containers: []
	W0728 15:46:59.958343   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:46:59.958400   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:46:59.995745   28750 logs.go:274] 0 containers: []
	W0728 15:46:59.995760   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:46:59.995825   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:47:00.026935   28750 logs.go:274] 0 containers: []
	W0728 15:47:00.026948   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:47:00.027009   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:47:00.060788   28750 logs.go:274] 0 containers: []
	W0728 15:47:00.060809   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:47:00.060874   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:47:00.093846   28750 logs.go:274] 0 containers: []
	W0728 15:47:00.093860   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:47:00.093918   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:47:00.124287   28750 logs.go:274] 0 containers: []
	W0728 15:47:00.124299   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:47:00.124305   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:47:00.124312   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 22:41:05 UTC, end at Thu 2022-07-28 22:47:01 UTC. --
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.475584996Z" level=info msg="ignoring event" container=605b2ce0e3877547d7f292baac0fcf2e1263b131390f1ca138c2eb06923cfb59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.541447594Z" level=info msg="ignoring event" container=e91b81d1dc870aedffddc2202127d5756fd4af9381706f082f6d8a62a9f62a12 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.609312065Z" level=info msg="ignoring event" container=2be3aebac2ad81bdd7ee2c337bf62a2850ddff5d61b9ab278be1b5092161396f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.718623266Z" level=info msg="ignoring event" container=589a60210160b5c8442a1abb2a56aedae556d1fd1e7c7b22987f05d12c55749c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.787457711Z" level=info msg="ignoring event" container=d2180fc16c0104c3a6379715a6bb6b7f3ab3f267d71afb32b792110092fb4f3c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.856272084Z" level=info msg="ignoring event" container=d3ae94d13f0c5715761d7eec855ecf13220f9a6b19b8d12e52e114a01eb0d34d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:35 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:35.931766156Z" level=info msg="ignoring event" container=678b4146bfd645109fab5371c1dd06b1d380caab0f42fe999edbb29ed477cf7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:45:36 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:45:36.029832255Z" level=info msg="ignoring event" container=bc96ab080e11116579e35688fdf8058bf2e051efc5bbe68e9d39116b74c3d360 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:46:01 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:01.010425567Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:01 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:01.010550628Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:01 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:01.011827455Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:01 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:01.777332851Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 28 22:46:04 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:04.682135949Z" level=info msg="ignoring event" container=117cd20a1edcac6e7c1c895a10c59ef9ac9abf4a9af5474436bcee68c60ef85f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:46:04 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:04.846525290Z" level=info msg="ignoring event" container=08104b1f598e18174657701b5f577f440a536335749d61c9ae494cc44c5c01a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:46:07 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:07.374124592Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 28 22:46:07 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:07.671967241Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 28 22:46:10 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:10.940764587Z" level=info msg="ignoring event" container=1428bf62df75e8d6001ac46cbdac462cbe800dd1022ff16dda709abc52f019be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:46:12 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:12.077812782Z" level=info msg="ignoring event" container=d55bb53547bb29e664bf66093cc96b54f0b3eceebd788da28c50d724356a4ec7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:46:15 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:15.209985902Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:15 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:15.210030631Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:15 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:15.211656948Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:57 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:57.707520373Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:57 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:57.707565619Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:57 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:57.782284483Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:46:58 no-preload-20220728153949-12923 dockerd[554]: time="2022-07-28T22:46:58.473565953Z" level=info msg="ignoring event" container=26a7abe6c205b3108a8b2b1a22ebadc9f51bc4789a79eb5947ab998103c5159f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	26a7abe6c205b       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   2                   a612834ac1b25
	a93fb9f591314       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   55 seconds ago       Running             kubernetes-dashboard        0                   5d2b839c3189f
	c56995f33a521       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   ef9726c3d6e93
	932389c13a32e       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   149ae2612e69d
	42b621e0915da       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   1859344ec0c9e
	d8f608d7350f0       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   4144606f15c19
	54f11c6f5e5cc       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   b8468190a3242
	50accddf195e0       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   48931cef22923
	61c5e29a44123       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   a6744ba8b63c2
	
	* 
	* ==> coredns [932389c13a32] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-20220728153949-12923
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-20220728153949-12923
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551
	                    minikube.k8s.io/name=no-preload-20220728153949-12923
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_28T15_45_45_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Jul 2022 22:45:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-20220728153949-12923
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Jul 2022 22:46:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Jul 2022 22:46:54 +0000   Thu, 28 Jul 2022 22:45:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Jul 2022 22:46:54 +0000   Thu, 28 Jul 2022 22:45:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Jul 2022 22:46:54 +0000   Thu, 28 Jul 2022 22:45:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Jul 2022 22:46:54 +0000   Thu, 28 Jul 2022 22:46:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    no-preload-20220728153949-12923
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                26065199-fa84-4b4b-8bc9-9762d3650182
	  Boot ID:                    defbf4cf-33ea-4fb3-a1e4-ed0c170fdaf9
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-kv2dp                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     64s
	  kube-system                 etcd-no-preload-20220728153949-12923                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         76s
	  kube-system                 kube-apiserver-no-preload-20220728153949-12923             250m (4%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-no-preload-20220728153949-12923    200m (3%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-wnfz5                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-no-preload-20220728153949-12923             100m (1%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 metrics-server-5c6f97fb75-gkqvh                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         61s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-b6ggg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-j2qnw                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 63s                kube-proxy       
	  Normal  NodeHasSufficientMemory  83s (x5 over 83s)  kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x5 over 83s)  kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x5 over 83s)  kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasSufficientPID
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  76s                kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasSufficientPID
	  Normal  NodeReady                76s                kubelet          Node no-preload-20220728153949-12923 status is now: NodeReady
	  Normal  RegisteredNode           65s                node-controller  Node no-preload-20220728153949-12923 event: Registered Node no-preload-20220728153949-12923 in Controller
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet          Node no-preload-20220728153949-12923 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s                 kubelet          Node no-preload-20220728153949-12923 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                7s                 kubelet          Node no-preload-20220728153949-12923 status is now: NodeReady
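
A note on the "%!)(MISSING)" tokens that appear around percentages elsewhere in this report: minikube relays kubectl's node description through a Go printf-style call with no arguments, so fmt renders each literal "%" as the error token "%!<verb>(MISSING)". A minimal sketch reproducing the artifact:

	package main

	import "fmt"

	func main() {
		// "%)" parses as verb ')' with no matching argument, so fmt emits
		// the error token "%!)(MISSING)" in place of the literal percent.
		fmt.Printf("cpu 850m (14%)  0 (0%)\n")
		// Prints: cpu 850m (14%!)(MISSING)  0 (0%!)(MISSING)
	}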
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [50accddf195e] <==
	* {"level":"info","ts":"2022-07-28T22:45:39.157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-07-28T22:45:39.157Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-07-28T22:45:39.159Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-28T22:45:39.159Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T22:45:39.159Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T22:45:39.160Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-28T22:45:39.160Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T22:45:39.552Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:no-preload-20220728153949-12923 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T22:45:39.553Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T22:45:39.554Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-28T22:45:39.554Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:45:39.555Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:47:01 up  1:07,  0 users,  load average: 0.25, 0.58, 0.90
	Linux no-preload-20220728153949-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [61c5e29a4412] <==
	* I0728 22:45:44.105709       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0728 22:45:45.030702       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0728 22:45:45.036536       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0728 22:45:45.044994       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0728 22:45:45.129299       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0728 22:45:57.291364       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0728 22:45:57.791467       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0728 22:45:58.344864       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0728 22:46:00.203577       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.108.223.49]
	I0728 22:46:00.436966       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.108.173.66]
	I0728 22:46:00.447794       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.99.1.148]
	W0728 22:46:01.063162       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 22:46:01.063200       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0728 22:46:01.063206       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 22:46:01.063172       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 22:46:01.063291       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0728 22:46:01.064311       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 22:47:01.019354       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 22:47:01.019433       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0728 22:47:01.019442       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 22:47:01.020713       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 22:47:01.020855       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0728 22:47:01.020893       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
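
The repeating pair of OpenAPI errors for v1beta1.metrics.k8s.io means the aggregated metrics API is registered but its backing service answers 503: the metrics-server pod never becomes ready (its image pull fails, per the kubelet log below). One quick check is "kubectl get apiservice v1beta1.metrics.k8s.io"; a Go sketch doing the same through the kube-aggregator clientset, assuming a kubeconfig at the default path:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := aggregator.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Read the APIService registration and print its availability conditions.
		svc, err := cs.ApiregistrationV1().APIServices().Get(context.TODO(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range svc.Status.Conditions {
			fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}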
	
	* 
	* ==> kube-controller-manager [d8f608d7350f] <==
	* I0728 22:45:57.945568       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-kv2dp"
	I0728 22:45:58.035224       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0728 22:45:58.039060       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-9vjb2"
	I0728 22:46:00.054316       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0728 22:46:00.059999       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0728 22:46:00.066379       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0728 22:46:00.130405       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-gkqvh"
	I0728 22:46:00.287582       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0728 22:46:00.330795       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 22:46:00.333076       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0728 22:46:00.335331       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:46:00.337065       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 22:46:00.341631       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 22:46:00.341794       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:46:00.341895       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:46:00.345987       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 22:46:00.346073       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:46:00.346103       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 22:46:00.346121       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:46:00.350841       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:46:00.350904       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 22:46:00.356281       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-b6ggg"
	I0728 22:46:00.363315       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-j2qnw"
	E0728 22:46:54.191769       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0728 22:46:54.195822       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [42b621e0915d] <==
	* I0728 22:45:58.322158       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0728 22:45:58.322216       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0728 22:45:58.322254       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 22:45:58.340362       1 server_others.go:206] "Using iptables Proxier"
	I0728 22:45:58.340399       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 22:45:58.340407       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 22:45:58.340417       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 22:45:58.340436       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:45:58.340801       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:45:58.341165       1 server.go:661] "Version info" version="v1.24.3"
	I0728 22:45:58.341263       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:45:58.342471       1 config.go:317] "Starting service config controller"
	I0728 22:45:58.342503       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 22:45:58.342523       1 config.go:226] "Starting endpoint slice config controller"
	I0728 22:45:58.342566       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 22:45:58.342581       1 config.go:444] "Starting node config controller"
	I0728 22:45:58.342588       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 22:45:58.442681       1 shared_informer.go:262] Caches are synced for node config
	I0728 22:45:58.442743       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0728 22:45:58.442755       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [54f11c6f5e5c] <==
	* W0728 22:45:42.053936       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0728 22:45:42.054238       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0728 22:45:42.054354       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0728 22:45:42.054367       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0728 22:45:42.055385       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0728 22:45:42.055424       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0728 22:45:42.055482       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0728 22:45:42.055502       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0728 22:45:42.055535       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0728 22:45:42.055630       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0728 22:45:42.055691       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0728 22:45:42.055757       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0728 22:45:42.055699       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0728 22:45:42.055770       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0728 22:45:42.877463       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0728 22:45:42.877514       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0728 22:45:42.970100       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0728 22:45:42.970136       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0728 22:45:42.987042       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0728 22:45:42.987081       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0728 22:45:43.010515       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0728 22:45:43.010550       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0728 22:45:43.053332       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0728 22:45:43.053370       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0728 22:45:43.450786       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
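
The "forbidden" reflector errors above are a normal startup race: kube-scheduler begins listing resources before the apiserver has finished bootstrapping the RBAC roles that authorize system:kube-scheduler, and they stop once the final "Caches are synced" line appears. The permission can be confirmed after startup with "kubectl auth can-i list pods --as=system:kube-scheduler", or with the equivalent SubjectAccessReview from Go (a sketch, assuming an admin kubeconfig at the default path):

	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Ask the apiserver whether system:kube-scheduler may list pods,
		// the exact permission the reflector errors complain about.
		sar := &authv1.SubjectAccessReview{
			Spec: authv1.SubjectAccessReviewSpec{
				User: "system:kube-scheduler",
				ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "pods"},
			},
		}
		res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
	}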
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 22:41:05 UTC, end at Thu 2022-07-28 22:47:02 UTC. --
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602688    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4bd8afa6-e125-44b3-b396-bcec5dc95ab3-kube-proxy\") pod \"kube-proxy-wnfz5\" (UID: \"4bd8afa6-e125-44b3-b396-bcec5dc95ab3\") " pod="kube-system/kube-proxy-wnfz5"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602734    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg8td\" (UniqueName: \"kubernetes.io/projected/4bd8afa6-e125-44b3-b396-bcec5dc95ab3-kube-api-access-hg8td\") pod \"kube-proxy-wnfz5\" (UID: \"4bd8afa6-e125-44b3-b396-bcec5dc95ab3\") " pod="kube-system/kube-proxy-wnfz5"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602771    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dblsc\" (UniqueName: \"kubernetes.io/projected/ab9a025e-16ee-44e3-a3ee-97431afb9113-kube-api-access-dblsc\") pod \"kubernetes-dashboard-5fd5574d9f-j2qnw\" (UID: \"ab9a025e-16ee-44e3-a3ee-97431afb9113\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-j2qnw"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602790    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9m5x\" (UniqueName: \"kubernetes.io/projected/f57a996a-e1a8-4e75-a619-5671e5398a85-kube-api-access-s9m5x\") pod \"coredns-6d4b75cb6d-kv2dp\" (UID: \"f57a996a-e1a8-4e75-a619-5671e5398a85\") " pod="kube-system/coredns-6d4b75cb6d-kv2dp"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602857    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92g2s\" (UniqueName: \"kubernetes.io/projected/362a0692-30b8-440a-9768-4fd3e312c24d-kube-api-access-92g2s\") pod \"dashboard-metrics-scraper-dffd48c4c-b6ggg\" (UID: \"362a0692-30b8-440a-9768-4fd3e312c24d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-b6ggg"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602906    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bd8afa6-e125-44b3-b396-bcec5dc95ab3-lib-modules\") pod \"kube-proxy-wnfz5\" (UID: \"4bd8afa6-e125-44b3-b396-bcec5dc95ab3\") " pod="kube-system/kube-proxy-wnfz5"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602924    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab9a025e-16ee-44e3-a3ee-97431afb9113-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-j2qnw\" (UID: \"ab9a025e-16ee-44e3-a3ee-97431afb9113\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-j2qnw"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602939    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/73a9ba2c-55f8-4a8f-97d0-40aee2a41f7c-tmp\") pod \"storage-provisioner\" (UID: \"73a9ba2c-55f8-4a8f-97d0-40aee2a41f7c\") " pod="kube-system/storage-provisioner"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602977    9821 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/362a0692-30b8-440a-9768-4fd3e312c24d-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-b6ggg\" (UID: \"362a0692-30b8-440a-9768-4fd3e312c24d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-b6ggg"
	Jul 28 22:46:55 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:55.602999    9821 reconciler.go:157] "Reconciler: start to sync state"
	Jul 28 22:46:56 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:56.762692    9821 request.go:601] Waited for 1.136558002s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 28 22:46:56 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:56.821225    9821 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-no-preload-20220728153949-12923\" already exists" pod="kube-system/kube-controller-manager-no-preload-20220728153949-12923"
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.002012    9821 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-no-preload-20220728153949-12923\" already exists" pod="kube-system/kube-scheduler-no-preload-20220728153949-12923"
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.177693    9821 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-no-preload-20220728153949-12923\" already exists" pod="kube-system/kube-apiserver-no-preload-20220728153949-12923"
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.407750    9821 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-no-preload-20220728153949-12923\" already exists" pod="kube-system/etcd-no-preload-20220728153949-12923"
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.783555    9821 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.783653    9821 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.783811    9821 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dpnz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-gkqvh_kube-system(44470584-40a9-4bb2-8cff-f06ef3e04c5a): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jul 28 22:46:57 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:57.783878    9821 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-gkqvh" podUID=44470584-40a9-4bb2-8cff-f06ef3e04c5a
	Jul 28 22:46:58 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:58.267679    9821 scope.go:110] "RemoveContainer" containerID="d55bb53547bb29e664bf66093cc96b54f0b3eceebd788da28c50d724356a4ec7"
	Jul 28 22:46:58 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:58.651600    9821 scope.go:110] "RemoveContainer" containerID="d55bb53547bb29e664bf66093cc96b54f0b3eceebd788da28c50d724356a4ec7"
	Jul 28 22:46:58 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:46:58.651841    9821 scope.go:110] "RemoveContainer" containerID="26a7abe6c205b3108a8b2b1a22ebadc9f51bc4789a79eb5947ab998103c5159f"
	Jul 28 22:46:58 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:46:58.652022    9821 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-b6ggg_kubernetes-dashboard(362a0692-30b8-440a-9768-4fd3e312c24d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-b6ggg" podUID=362a0692-30b8-440a-9768-4fd3e312c24d
	Jul 28 22:47:01 no-preload-20220728153949-12923 kubelet[9821]: I0728 22:47:01.520949    9821 scope.go:110] "RemoveContainer" containerID="26a7abe6c205b3108a8b2b1a22ebadc9f51bc4789a79eb5947ab998103c5159f"
	Jul 28 22:47:01 no-preload-20220728153949-12923 kubelet[9821]: E0728 22:47:01.521202    9821 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-b6ggg_kubernetes-dashboard(362a0692-30b8-440a-9768-4fd3e312c24d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-b6ggg" podUID=362a0692-30b8-440a-9768-4fd3e312c24d
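
Two failure modes are visible in the kubelet log: metrics-server cannot start because its image points at the unresolvable registry fake.domain (the DNS lookup itself fails), and dashboard-metrics-scraper sits in CrashLoopBackOff with a 10s back-off. Kubelet doubles the restart back-off after each failed restart up to a 5-minute cap; a small sketch of that schedule (constants taken from kubelet defaults, for illustration only):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Crash-loop restart back-off: start at 10s, double per failed
		// restart, cap at 5m (kubelet's default constants).
		backoff, limit := 10*time.Second, 5*time.Minute
		for i := 1; i <= 7; i++ {
			fmt.Printf("restart %d: wait %v\n", i, backoff)
			backoff *= 2
			if backoff > limit {
				backoff = limit
			}
		}
	}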
	
	* 
	* ==> kubernetes-dashboard [a93fb9f59131] <==
	* 2022/07/28 22:46:06 Using namespace: kubernetes-dashboard
	2022/07/28 22:46:06 Using in-cluster config to connect to apiserver
	2022/07/28 22:46:06 Using secret token for csrf signing
	2022/07/28 22:46:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/28 22:46:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/28 22:46:06 Successful initial request to the apiserver, version: v1.24.3
	2022/07/28 22:46:06 Generating JWE encryption key
	2022/07/28 22:46:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/28 22:46:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/28 22:46:07 Initializing JWE encryption key from synchronized object
	2022/07/28 22:46:07 Creating in-cluster Sidecar client
	2022/07/28 22:46:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/28 22:46:07 Serving insecurely on HTTP port: 9090
	2022/07/28 22:46:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/28 22:46:06 Starting overwatch
	
	* 
	* ==> storage-provisioner [c56995f33a52] <==
	* I0728 22:46:00.833319       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 22:46:00.842602       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 22:46:00.842657       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0728 22:46:00.849866       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0728 22:46:00.849995       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-20220728153949-12923_7d705e6f-182c-4a75-b5f8-00aedb056239!
	I0728 22:46:00.851067       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30f39fb9-9f84-46b1-a264-2ea8bc0e9187", APIVersion:"v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-20220728153949-12923_7d705e6f-182c-4a75-b5f8-00aedb056239 became leader
	I0728 22:46:00.950760       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-20220728153949-12923_7d705e6f-182c-4a75-b5f8-00aedb056239!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220728153949-12923 -n no-preload-20220728153949-12923
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-20220728153949-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-gkqvh
helpers_test.go:272: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context no-preload-20220728153949-12923 describe pod metrics-server-5c6f97fb75-gkqvh
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context no-preload-20220728153949-12923 describe pod metrics-server-5c6f97fb75-gkqvh: exit status 1 (281.474688ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-gkqvh" not found

** /stderr **
helpers_test.go:277: kubectl --context no-preload-20220728153949-12923 describe pod metrics-server-5c6f97fb75-gkqvh: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (43.34s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0728 15:52:03.852329   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:52:13.967160   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:52:29.345232   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:52:37.920420   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:52:51.969702   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:53:04.771195   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
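
The recurring cert_rotation.go errors are background noise from client-go's certificate-rotation watcher: the test run's kubeconfig still references client.crt files under profiles (addons-, false-, functional-, ...) that earlier tests already tore down, so every reload logs "no such file or directory". A Go sketch that lists kubeconfig auth entries whose client-certificate files are gone, assuming the default ~/.kube/config location:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		// Report users whose client-certificate path no longer exists.
		for name, auth := range cfg.AuthInfos {
			if auth.ClientCertificate == "" {
				continue
			}
			if _, err := os.Stat(auth.ClientCertificate); err != nil {
				fmt.Printf("%s: %v\n", name, err)
			}
		}
	}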

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:55:30.168524   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:55:41.926103   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:55:43.552888   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:55:45.925726   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:56:09.609529   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:56:52.837340   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:57:06.600098   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:57:08.972569   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:57:13.962352   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:57:29.342167   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:57:37.915218   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:57:51.965742   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:58:04.768106   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:58:15.887191   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:58:36.590318   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:58:52.424883   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:59:07.109531   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:59:10.578831   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:59:15.019324   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 15:59:59.641264   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
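
Every poll above fails with EOF against https://127.0.0.1:58973: the TCP connection is accepted (Docker's port forward is in place) but nothing behind it answers, which is consistent with the apiserver inside the container being down. One way to probe the endpoint by hand (a manual check, not part of the test):

    curl -sk https://127.0.0.1:58973/version
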
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923: exit status 2 (416.961158ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-20220728153807-12923" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
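
The wait that just timed out is the poll visible in the WARNING lines above: list pods matching k8s-app=kubernetes-dashboard until one comes up or the 9m0s budget expires. A roughly equivalent manual check, assuming the kubeconfig context minikube creates under the profile name:

    kubectl --context old-k8s-version-20220728153807-12923 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
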
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220728153807-12923
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220728153807-12923:

-- stdout --
	[
	    {
	        "Id": "2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f",
	        "Created": "2022-07-28T22:38:14.165684968Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246485,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:43:51.426692673Z",
	            "FinishedAt": "2022-07-28T22:43:48.536711569Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hosts",
	        "LogPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f-json.log",
	        "Name": "/old-k8s-version-20220728153807-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220728153807-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220728153807-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220728153807-12923",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220728153807-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220728153807-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "468350ddce385e27616eb7d67f293e8984e4658354bccab9cc7f747311c10282",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58969"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58970"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58971"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58972"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58973"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/468350ddce38",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220728153807-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2056d9a86a4c",
	                        "old-k8s-version-20220728153807-12923"
	                    ],
	                    "NetworkID": "a0b55590b406427f4aa9e75be1fbe382dd54fa7a1c14e888e401b45bb478b32d",
	                    "EndpointID": "d3216ca95fb05fd9cb589a1b6ef0ebe5edfacf75863c36ec7c40cddaa73c1dc8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
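
The inspect output shows the container is healthy at the Docker level: State.Status is "running" and 8443/tcp (the apiserver port) is published on host port 58973, the same endpoint the dashboard polls were hitting. That mapping can be read directly with an inspect template (a manual check, not part of the test):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-20220728153807-12923
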
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923: exit status 2 (444.773807ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
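
Taken together, the two status probes report Host=Running but APIServer=Stopped, matching the EOF behaviour above; minikube status exits non-zero whenever a component is down, hence the "(may be ok)" note. Both fields can be read in one call with a combined Go template (illustrative sketch only):

    out/minikube-darwin-amd64 status -p old-k8s-version-20220728153807-12923 --format '{{.Host}}/{{.APIServer}}'
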
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220728153807-12923 logs -n 25

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220728153807-12923 logs -n 25: (3.683443332s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220728155419-12923      | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | disable-driver-mounts-20220728155419-12923        |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:00 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:00 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:01 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT |                     |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 15:55:30
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 15:55:30.758749   30316 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:55:30.758949   30316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:55:30.758955   30316 out.go:309] Setting ErrFile to fd 2...
	I0728 15:55:30.758959   30316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:55:30.759060   30316 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:55:30.759520   30316 out.go:303] Setting JSON to false
	I0728 15:55:30.774489   30316 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9972,"bootTime":1659038958,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:55:30.774588   30316 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:55:30.795830   30316 out.go:177] * [default-k8s-different-port-20220728155420-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:55:30.839027   30316 notify.go:193] Checking for updates...
	I0728 15:55:30.860766   30316 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:55:30.882054   30316 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:55:30.903796   30316 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:55:30.924802   30316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:55:30.946060   30316 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:55:30.968665   30316 config.go:178] Loaded profile config "default-k8s-different-port-20220728155420-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:55:30.969316   30316 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:55:31.036834   30316 docker.go:137] docker version: linux-20.10.17
	I0728 15:55:31.037003   30316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:55:31.170708   30316 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:55:31.104106473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:55:31.213459   30316 out.go:177] * Using the docker driver based on existing profile
	I0728 15:55:31.235399   30316 start.go:284] selected driver: docker
	I0728 15:55:31.235424   30316 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220728155420-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728155420-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:55:31.235546   30316 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:55:31.238871   30316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:55:31.370748   30316 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:55:31.306721001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
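The capability probe above is plain `docker system info --format "{{json .}}"`. A small sketch, not minikube's code, that shells out the same way and decodes just the fields the rest of this log relies on (CPU count, memory, server version, cgroup driver):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo keeps only the fields this sketch cares about; the full
// payload is the large object shown in the log line above.
type dockerInfo struct {
	NCPU          int
	MemTotal      int64
	ServerVersion string
	CgroupDriver  string
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s: %d CPUs, %d bytes memory, cgroup driver %s\n",
		info.ServerVersion, info.NCPU, info.MemTotal, info.CgroupDriver)
}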
	I0728 15:55:31.370921   30316 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 15:55:31.370940   30316 cni.go:95] Creating CNI manager for ""
	I0728 15:55:31.370951   30316 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:55:31.370961   30316 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220728155420-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728155420-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:55:31.392647   30316 out.go:177] * Starting control plane node default-k8s-different-port-20220728155420-12923 in cluster default-k8s-different-port-20220728155420-12923
	I0728 15:55:31.414823   30316 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:55:31.436734   30316 out.go:177] * Pulling base image ...
	I0728 15:55:31.478779   30316 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:55:31.478823   30316 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:55:31.478857   30316 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0728 15:55:31.478885   30316 cache.go:57] Caching tarball of preloaded images
	I0728 15:55:31.479127   30316 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:55:31.479764   30316 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0728 15:55:31.480270   30316 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/config.json ...
	I0728 15:55:31.541959   30316 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:55:31.541977   30316 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:55:31.541987   30316 cache.go:208] Successfully downloaded all kic artifacts
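"Found ... in local docker daemon, skipping pull" above boils down to an existence probe against the daemon. A hedged illustration of that probe, not the actual implementation, using `docker image inspect` (which exits non-zero when the reference is absent):

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the daemon already holds ref, so a pull
// (or a load from the cache) can be skipped.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481"
	fmt.Println(ref, "in daemon:", imageInDaemon(ref))
}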
	I0728 15:55:31.542029   30316 start.go:370] acquiring machines lock for default-k8s-different-port-20220728155420-12923: {Name:mk0e822f9f2b9adffe1c022a5e24460488a5334a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:55:31.542121   30316 start.go:374] acquired machines lock for "default-k8s-different-port-20220728155420-12923" in 68.722µs
	I0728 15:55:31.542144   30316 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:55:31.542153   30316 fix.go:55] fixHost starting: 
	I0728 15:55:31.542383   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 15:55:31.605720   30316 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220728155420-12923: state=Stopped err=<nil>
	W0728 15:55:31.605754   30316 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:55:31.627632   30316 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220728155420-12923" ...
	I0728 15:55:31.649486   30316 cli_runner.go:164] Run: docker start default-k8s-different-port-20220728155420-12923
	I0728 15:55:31.993066   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 15:55:32.060550   30316 kic.go:415] container "default-k8s-different-port-20220728155420-12923" state is running.
	I0728 15:55:32.061181   30316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.129615   30316 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/config.json ...
	I0728 15:55:32.130011   30316 machine.go:88] provisioning docker machine ...
	I0728 15:55:32.130035   30316 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220728155420-12923"
	I0728 15:55:32.130125   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.198326   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:32.198552   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:32.198567   30316 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220728155420-12923 && echo "default-k8s-different-port-20220728155420-12923" | sudo tee /etc/hostname
	I0728 15:55:32.326053   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220728155420-12923
	
	I0728 15:55:32.326140   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.391769   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:32.391946   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:32.391963   30316 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220728155420-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220728155420-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220728155420-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:55:32.512852   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:55:32.512877   30316 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:55:32.512901   30316 ubuntu.go:177] setting up certificates
	I0728 15:55:32.512910   30316 provision.go:83] configureAuth start
	I0728 15:55:32.512984   30316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.578514   30316 provision.go:138] copyHostCerts
	I0728 15:55:32.578594   30316 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:55:32.578603   30316 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:55:32.578690   30316 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:55:32.578899   30316 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:55:32.578917   30316 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:55:32.578982   30316 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:55:32.579122   30316 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:55:32.579139   30316 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:55:32.579198   30316 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:55:32.579317   30316 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220728155420-12923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220728155420-12923]
	I0728 15:55:32.674959   30316 provision.go:172] copyRemoteCerts
	I0728 15:55:32.675028   30316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:55:32.675080   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.741088   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:32.827517   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:55:32.845806   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0728 15:55:32.863725   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 15:55:32.880989   30316 provision.go:86] duration metric: configureAuth took 368.070725ms
	I0728 15:55:32.881002   30316 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:55:32.881150   30316 config.go:178] Loaded profile config "default-k8s-different-port-20220728155420-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:55:32.881205   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.946636   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:32.946805   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:32.946817   30316 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:55:33.067666   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:55:33.067682   30316 ubuntu.go:71] root file system type: overlay
	I0728 15:55:33.067838   30316 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:55:33.067911   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.131727   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:33.131899   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:33.131963   30316 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:55:33.261631   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
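The doubled ExecStart in the unit above is deliberate: the empty `ExecStart=` clears whatever command a base unit defined, so only the following line remains (the unit's own comments spell out the failure mode otherwise). A trimmed, hypothetical Go sketch that renders the same shape of unit with the cert paths substituted; the real file is the full one echoed above:

package main

import "fmt"

// unitTemplate is a cut-down stand-in for the docker.service written by
// the provisioner; %s slots take the TLS material just copied to /etc/docker.
const unitTemplate = `[Unit]
Description=Docker Application Container Engine

[Service]
Type=notify
# Empty ExecStart= first: it clears any command inherited from a base
# unit; otherwise systemd sees two ExecStart= lines and refuses to start.
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock --tlsverify --tlscacert %s --tlscert %s --tlskey %s

[Install]
WantedBy=multi-user.target
`

func main() {
	fmt.Printf(unitTemplate, "/etc/docker/ca.pem", "/etc/docker/server.pem", "/etc/docker/server-key.pem")
}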
	
	I0728 15:55:33.261710   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.325641   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:33.325808   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:33.325822   30316 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:55:33.449804   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:55:33.449827   30316 machine.go:91] provisioned docker machine in 1.319828847s
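The `diff -u ... || { mv ...; systemctl ...; }` one-liner above restarts docker only when the rendered unit actually changed. The same idempotent pattern as a Go sketch, with paths and commands copied from the log (running it for real needs root; this is an illustration, not minikube's code):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit leaves the unit alone when it already matches the rendered
// content; otherwise it installs the new file and reloads/restarts docker.
func updateUnit(rendered []byte) error {
	const path = "/lib/systemd/system/docker.service"
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip daemon-reload and restart
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(updateUnit([]byte("# placeholder unit\n")))
}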
	I0728 15:55:33.449836   30316 start.go:307] post-start starting for "default-k8s-different-port-20220728155420-12923" (driver="docker")
	I0728 15:55:33.449845   30316 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:55:33.449924   30316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:55:33.449974   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.513907   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:33.601033   30316 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:55:33.604777   30316 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:55:33.604802   30316 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:55:33.604815   30316 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:55:33.604824   30316 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:55:33.604833   30316 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:55:33.604935   30316 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:55:33.605075   30316 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:55:33.605215   30316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:55:33.611867   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:55:33.627853   30316 start.go:310] post-start completed in 178.00861ms
	I0728 15:55:33.627932   30316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:55:33.627977   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.695005   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:33.783324   30316 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:55:33.787801   30316 fix.go:57] fixHost completed within 2.245684947s
	I0728 15:55:33.787824   30316 start.go:82] releasing machines lock for "default-k8s-different-port-20220728155420-12923", held for 2.245723265s
	I0728 15:55:33.787920   30316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.853332   30316 ssh_runner.go:195] Run: systemctl --version
	I0728 15:55:33.853337   30316 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:55:33.853402   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.853416   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.920417   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:33.920583   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:34.004995   30316 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:55:34.200488   30316 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:55:34.200552   30316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:55:34.212709   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:55:34.225673   30316 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:55:34.294929   30316 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:55:34.354645   30316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:55:34.418257   30316 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:55:34.646963   30316 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 15:55:34.712035   30316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:55:34.777443   30316 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 15:55:34.786754   30316 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 15:55:34.786821   30316 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 15:55:34.790552   30316 start.go:471] Will wait 60s for crictl version
	I0728 15:55:34.790591   30316 ssh_runner.go:195] Run: sudo crictl version
	I0728 15:55:34.891481   30316 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0728 15:55:34.891547   30316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:55:34.925151   30316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:55:35.004758   30316 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 15:55:35.004838   30316 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220728155420-12923 dig +short host.docker.internal
	I0728 15:55:35.122418   30316 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:55:35.122524   30316 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:55:35.126670   30316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:55:35.135876   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:35.199963   30316 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:55:35.200048   30316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:55:35.230417   30316 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0728 15:55:35.230434   30316 docker.go:542] Images already preloaded, skipping extraction
	I0728 15:55:35.230509   30316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:55:35.259780   30316 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0728 15:55:35.259801   30316 cache_images.go:84] Images are preloaded, skipping loading
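"Images are preloaded, skipping loading" follows from comparing the runtime's image list against the expected set. An illustrative sketch using the same `--format` string as the log; the two image refs checked here are a subset of the list above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same format string the log uses to list what the runtime already has.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	// Two of the images expected for v1.24.3; the full set is in the log.
	for _, want := range []string{
		"k8s.gcr.io/kube-apiserver:v1.24.3",
		"k8s.gcr.io/pause:3.7",
	} {
		if !have[want] {
			fmt.Println("missing, would extract preload:", want)
		}
	}
}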
	I0728 15:55:35.259876   30316 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:55:35.335289   30316 cni.go:95] Creating CNI manager for ""
	I0728 15:55:35.335301   30316 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:55:35.335314   30316 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:55:35.335329   30316 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220728155420-12923 NodeName:default-k8s-different-port-20220728155420-12923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:55:35.335442   30316 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220728155420-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
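Note how the non-default API server port this test exercises threads through the generated config: it appears both as `localAPIEndpoint.bindPort` and in `controlPlaneEndpoint`. A trivial consistency check, with the values copied from the YAML above:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Values copied from the generated kubeadm config above.
	const bindPort = "8444"
	_, epPort, err := net.SplitHostPort("control-plane.minikube.internal:8444")
	if err != nil {
		panic(err)
	}
	fmt.Println("bindPort and controlPlaneEndpoint agree:", epPort == bindPort)
}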
	
	I0728 15:55:35.335545   30316 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220728155420-12923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728155420-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0728 15:55:35.335608   30316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 15:55:35.342895   30316 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:55:35.342938   30316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:55:35.349853   30316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0728 15:55:35.361884   30316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:55:35.374675   30316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0728 15:55:35.386579   30316 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:55:35.390206   30316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:55:35.399217   30316 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923 for IP: 192.168.67.2
	I0728 15:55:35.399333   30316 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:55:35.399381   30316 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:55:35.399467   30316 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.key
	I0728 15:55:35.399524   30316 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/apiserver.key.c7fa3a9e
	I0728 15:55:35.399597   30316 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/proxy-client.key
	I0728 15:55:35.399795   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:55:35.399835   30316 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:55:35.399850   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:55:35.399884   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:55:35.399915   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:55:35.399943   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:55:35.400003   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:55:35.400535   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:55:35.416976   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 15:55:35.433119   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:55:35.449169   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 15:55:35.465944   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:55:35.482473   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:55:35.499106   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:55:35.515045   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:55:35.531222   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:55:35.548054   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:55:35.564648   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:55:35.581798   30316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 15:55:35.593965   30316 ssh_runner.go:195] Run: openssl version
	I0728 15:55:35.599527   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:55:35.607305   30316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:55:35.611358   30316 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:55:35.611403   30316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:55:35.616434   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 15:55:35.623673   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:55:35.631654   30316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:55:35.635527   30316 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:55:35.635576   30316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:55:35.640871   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 15:55:35.648249   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:55:35.655974   30316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:55:35.660049   30316 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:55:35.660093   30316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:55:35.665541   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
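The three openssl/ln pairs above are the classic CA-trust installation: hash the certificate's subject, then link the file as `<hash>.0` where OpenSSL-based tools look certificates up. A sketch of computing that link name, printing the link command rather than running it:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same command the log runs to derive the subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	// The log's next step is the equivalent of this ln invocation.
	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
}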
	I0728 15:55:35.672843   30316 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220728155420-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728155420-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:55:35.672943   30316 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:55:35.701097   30316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:55:35.708861   30316 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 15:55:35.708875   30316 kubeadm.go:626] restartCluster start
	I0728 15:55:35.708918   30316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 15:55:35.715675   30316 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:35.715731   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:35.792225   30316 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220728155420-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:55:35.792426   30316 kubeconfig.go:127] "default-k8s-different-port-20220728155420-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 15:55:35.792849   30316 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:55:35.794157   30316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 15:55:35.802225   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:35.802292   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:35.810745   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:36.012529   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:36.012639   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:36.023340   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:36.210864   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:36.211001   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:36.220628   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	[log elided: the identical "Checking apiserver status ..." / failed pgrep pair was retried roughly every 200ms, pid 30316 throughout, from 15:55:36.411 through 15:55:38.821]
	I0728 15:55:38.821127   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:38.821175   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:38.830270   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.830283   30316 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
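
The run of failures above is minikube's liveness probe for the control plane: it retries `sudo pgrep -xnf kube-apiserver.*minikube.*` on the node until a pid appears, and once the wait budget is spent it concludes the cluster needs reconfiguring. A minimal standalone sketch of that loop (run locally rather than through minikube's SSH runner; the function name, 200ms interval and 10s budget are illustrative assumptions, not minikube's exact values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until kube-apiserver shows up or the
// deadline passes, mirroring the "Checking apiserver status ..." loop above.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // pid found
		}
		time.Sleep(200 * time.Millisecond) // pgrep exits 1 when nothing matches
	}
	return "", fmt.Errorf("timed out waiting for kube-apiserver pid")
}

func main() {
	pid, err := waitForAPIServerPID(10 * time.Second)
	if err != nil {
		fmt.Println("needs reconfigure:", err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
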
	I0728 15:55:38.830287   30316 kubeadm.go:1092] stopping kube-system containers ...
	I0728 15:55:38.830345   30316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:55:38.860913   30316 docker.go:443] Stopping containers: [7ec6c6148c24 fa1f81d73248 17fdf8663ffe 2fdb37a458e2 b90b24e0cb1f c98d3d66b2c3 d9963c325023 0204554478fa ab27dfcb31e5 49d7071af640 288680d90206 651e3092e073 ac40e1ec26cd 9ab0cfc84627 c85c92139dea 9a3c98b2d6cb]
	I0728 15:55:38.860989   30316 ssh_runner.go:195] Run: docker stop 7ec6c6148c24 fa1f81d73248 17fdf8663ffe 2fdb37a458e2 b90b24e0cb1f c98d3d66b2c3 d9963c325023 0204554478fa ab27dfcb31e5 49d7071af640 288680d90206 651e3092e073 ac40e1ec26cd 9ab0cfc84627 c85c92139dea 9a3c98b2d6cb
	I0728 15:55:38.890667   30316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
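
Before reconfiguring, the kube-system containers and then the kubelet are stopped, as logged above. A sketch of the same teardown sequence, assuming `docker`, `sudo` and `systemctl` are available on the node; the function name is mine:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists every container whose name matches the
// kubelet's k8s_<container>_<pod>_(kube-system)_ naming pattern, stops them,
// then stops the kubelet so it cannot restart them.
func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		return fmt.Errorf("listing containers: %w", err)
	}
	ids := strings.Fields(string(out))
	if len(ids) > 0 {
		args := append([]string{"stop"}, ids...)
		if err := exec.Command("docker", args...).Run(); err != nil {
			return fmt.Errorf("stopping containers: %w", err)
		}
	}
	return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println(err)
	}
}
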
	I0728 15:55:38.900785   30316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:55:38.907947   30316 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 28 22:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 28 22:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jul 28 22:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 28 22:54 /etc/kubernetes/scheduler.conf
	
	I0728 15:55:38.908005   30316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0728 15:55:38.915307   30316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0728 15:55:38.922584   30316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0728 15:55:38.929622   30316 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.929673   30316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 15:55:38.936409   30316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0728 15:55:38.943336   30316 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.943378   30316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
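
The grep-and-remove steps above are the stale-config cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint (port 8444 here, because this profile uses a non-default apiserver port); anything else is deleted so the following kubeadm phases regenerate it. A hedged sketch of that check, with any grep failure treated as "endpoint missing":

package main

import (
	"fmt"
	"os/exec"
)

// pruneStaleKubeconfigs deletes any kubeconfig that does not mention the
// expected endpoint, so kubeadm will rewrite it with the right server URL.
func pruneStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
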
	I0728 15:55:38.950159   30316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:55:38.957430   30316 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 15:55:38.957440   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:39.001063   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:39.978959   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:40.156999   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:40.205624   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
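
Rather than a full `kubeadm init`, the restart path replays individual init phases against the generated kubeadm.yaml, in the order logged above. A self-contained sketch of that sequence (without the `sudo env` wrapper the log uses; error handling simplified):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runInitPhases invokes each kubeadm init phase separately against the
// generated config, so existing cluster state can be reused where possible.
func runInitPhases(binDir, config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", config)
		cmd := exec.Command(binDir+"/kubeadm", args...)
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v failed: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("/var/lib/minikube/binaries/v1.24.3", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}
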
	I0728 15:55:40.261056   30316 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:55:40.261121   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:55:40.793843   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:55:41.293947   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:55:41.312913   30316 api_server.go:71] duration metric: took 1.051877628s to wait for apiserver process to appear ...
	I0728 15:55:41.312925   30316 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:55:41.312937   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:41.314088   30316 api_server.go:256] stopped: https://127.0.0.1:59515/healthz: Get "https://127.0.0.1:59515/healthz": EOF
	I0728 15:55:41.814307   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:44.109812   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 15:55:44.109835   30316 api_server.go:102] status: https://127.0.0.1:59515/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 15:55:44.314142   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:44.320716   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:55:44.320735   30316 api_server.go:102] status: https://127.0.0.1:59515/healthz returned error 500: [same hook-by-hook body as above: rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes still failing]
	I0728 15:55:44.814131   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:44.819883   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 500: [same body as above]
	W0728 15:55:44.819900   30316 api_server.go:102] status: https://127.0.0.1:59515/healthz returned error 500: [same body as above]
	I0728 15:55:45.316185   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:45.323137   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 200:
	ok
	I0728 15:55:45.329031   30316 api_server.go:140] control plane version: v1.24.3
	I0728 15:55:45.329042   30316 api_server.go:130] duration metric: took 4.016180474s to wait for apiserver health ...
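
The healthz progression above - EOF while the socket comes up, then 403 before anonymous access is wired, then 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, then 200 - is the normal startup sequence, and the prober treats every non-200 result as retryable. A sketch of such a poll loop (the 500ms interval and one-minute budget are illustrative; this anonymous version also skips the client certificates a real prober might send):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls /healthz until it returns 200 "ok". Transport errors,
// 403 and 500 are all treated as "not ready yet", never as fatal.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Anonymous probe against the apiserver's self-signed serving cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	if err := waitHealthz("https://127.0.0.1:59515/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
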
	I0728 15:55:45.329048   30316 cni.go:95] Creating CNI manager for ""
	I0728 15:55:45.329052   30316 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:55:45.329064   30316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:55:45.335966   30316 system_pods.go:59] 8 kube-system pods found
	I0728 15:55:45.335982   30316 system_pods.go:61] "coredns-6d4b75cb6d-p47tc" [097a4ddd-127a-4d76-9ef2-b31856680a61] Running
	I0728 15:55:45.335987   30316 system_pods.go:61] "etcd-default-k8s-different-port-20220728155420-12923" [b3af4c0d-6e4a-4de2-94a7-6f0e9804c43e] Running
	I0728 15:55:45.335992   30316 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220728155420-12923" [915ee0d1-3a30-4488-a8fc-a2fd46ff53dc] Running
	I0728 15:55:45.335999   30316 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220728155420-12923" [6697437d-e349-4688-91e7-6755001fc84c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0728 15:55:45.336004   30316 system_pods.go:61] "kube-proxy-nbrlj" [9c349fd7-0054-4e68-8374-3d4ccfb14b9d] Running
	I0728 15:55:45.336008   30316 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220728155420-12923" [f7033e33-c299-41ac-b929-b557f088bd55] Running
	I0728 15:55:45.336013   30316 system_pods.go:61] "metrics-server-5c6f97fb75-q8trj" [880b41fa-bdc2-4c65-b3c0-05c1487607d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:55:45.336019   30316 system_pods.go:61] "storage-provisioner" [18cb92a5-43f3-4ec9-aa95-d82651b00937] Running
	I0728 15:55:45.336023   30316 system_pods.go:74] duration metric: took 6.955342ms to wait for pod list to return data ...
	I0728 15:55:45.336030   30316 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:55:45.339315   30316 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:55:45.339330   30316 node_conditions.go:123] node cpu capacity is 6
	I0728 15:55:45.339339   30316 node_conditions.go:105] duration metric: took 3.304859ms to run NodePressure ...
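
The NodePressure step reads each node's capacity and conditions to confirm nothing is under resource pressure before proceeding. A rough equivalent using kubectl's JSON output instead of minikube's in-process client (field selection and output format are mine):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Print each node's capacity and flag any pressure condition that is True.
func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl:", err)
		return
	}
	var nodes struct {
		Items []struct {
			Metadata struct{ Name string }
			Status   struct {
				Capacity   map[string]string
				Conditions []struct{ Type, Status string }
			}
		}
	}
	if err := json.Unmarshal(out, &nodes); err != nil {
		fmt.Println(err)
		return
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
		for _, c := range n.Status.Conditions {
			if (c.Type == "DiskPressure" || c.Type == "MemoryPressure" || c.Type == "PIDPressure") && c.Status == "True" {
				fmt.Printf("  node under %s\n", c.Type)
			}
		}
	}
}
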
	I0728 15:55:45.339355   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:45.458228   30316 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0728 15:55:45.462563   30316 kubeadm.go:777] kubelet initialised
	I0728 15:55:45.462573   30316 kubeadm.go:778] duration metric: took 4.332569ms waiting for restarted kubelet to initialise ...
	I0728 15:55:45.462580   30316 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:55:45.466883   30316 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-p47tc" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.471320   30316 pod_ready.go:92] pod "coredns-6d4b75cb6d-p47tc" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:45.471328   30316 pod_ready.go:81] duration metric: took 4.434775ms waiting for pod "coredns-6d4b75cb6d-p47tc" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.471334   30316 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.475851   30316 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:45.475859   30316 pod_ready.go:81] duration metric: took 4.521291ms waiting for pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.475864   30316 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.480433   30316 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:45.480440   30316 pod_ready.go:81] duration metric: took 4.572466ms waiting for pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.480448   30316 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:47.739926   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:49.740311   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:52.239842   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:54.741106   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:57.241016   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:59.242030   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:59.739349   30316 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:59.739360   30316 pod_ready.go:81] duration metric: took 14.259145114s waiting for pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.739367   30316 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nbrlj" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.743616   30316 pod_ready.go:92] pod "kube-proxy-nbrlj" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:59.743625   30316 pod_ready.go:81] duration metric: took 4.252558ms waiting for pod "kube-proxy-nbrlj" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.743630   30316 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.747906   30316 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:59.747914   30316 pod_ready.go:81] duration metric: took 4.279711ms waiting for pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.747923   30316 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace to be "Ready" ...
	I0728 15:56:01.759363   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	[log elided: the same pod_ready probe for "metrics-server-5c6f97fb75-q8trj" logged "Ready":"False" roughly every 2.5s from 15:56:04 through 15:59:55]
	I0728 15:59:57.754071   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:59.748516   30316 pod_ready.go:81] duration metric: took 4m0.004577503s waiting for pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace to be "Ready" ...
	E0728 15:59:59.748542   30316 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0728 15:59:59.748610   30316 pod_ready.go:38] duration metric: took 4m14.290256677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:59:59.748647   30316 kubeadm.go:630] restartCluster took 4m24.044163545s
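
The long metrics-server wait above is the pod_ready loop: poll the pod's Ready condition, log each probe, and once the 4m0s per-pod budget is spent give up with a warning rather than failing hard. A sketch of the same shape via kubectl (namespace, pod name and timings are taken from the log; the jsonpath query and function are my illustration, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls a pod's Ready condition until true or until the
// per-pod budget runs out, in which case it reports but does not abort.
func waitPodReady(ns, pod string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		status := strings.TrimSpace(string(out))
		if err == nil && status == "True" {
			return true
		}
		fmt.Printf("pod %q has status \"Ready\":%q\n", pod, status)
		time.Sleep(2500 * time.Millisecond) // the log polls roughly every 2.5s
	}
	fmt.Printf("timed out waiting for pod %q (will not retry!)\n", pod)
	return false
}

func main() {
	waitPodReady("kube-system", "metrics-server-5c6f97fb75-q8trj", 4*time.Minute)
}
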
	W0728 15:59:59.748768   30316 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0728 15:59:59.748795   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0728 16:00:02.104339   30316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.355568477s)
	I0728 16:00:02.104398   30316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 16:00:02.113958   30316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 16:00:02.121438   30316 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 16:00:02.121481   30316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 16:00:02.129052   30316 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 16:00:02.129078   30316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 16:00:02.411869   30316 out.go:204]   - Generating certificates and keys ...
	I0728 16:00:03.414591   30316 out.go:204]   - Booting up control plane ...
	I0728 16:00:10.524898   30316 out.go:204]   - Configuring RBAC rules ...
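
Having given up on restarting the existing cluster, minikube falls back to `kubeadm reset` followed by a fresh `kubeadm init` with a long --ignore-preflight-errors list (the docker driver cannot satisfy checks such as Swap, Mem and SystemVerification). A condensed sketch of that fallback; the ignore list here is abbreviated, the full one appears in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// resetAndInit wipes the old control plane, then bootstraps a new one from
// the generated kubeadm.yaml, skipping preflight checks the driver fails.
func resetAndInit(binDir, config, ignore string) error {
	kubeadm := binDir + "/kubeadm"
	reset := exec.Command("sudo", kubeadm, "reset",
		"--cri-socket", "/var/run/cri-dockerd.sock", "--force")
	reset.Stdout, reset.Stderr = os.Stdout, os.Stderr
	if err := reset.Run(); err != nil {
		return fmt.Errorf("kubeadm reset: %w", err)
	}
	init := exec.Command("sudo", kubeadm, "init",
		"--config", config, "--ignore-preflight-errors="+ignore)
	init.Stdout, init.Stderr = os.Stdout, os.Stderr
	return init.Run()
}

func main() {
	err := resetAndInit("/var/lib/minikube/binaries/v1.24.3",
		"/var/tmp/minikube/kubeadm.yaml", "Swap,Mem,SystemVerification")
	if err != nil {
		fmt.Println(err)
	}
}
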
	I0728 16:00:10.902674   30316 cni.go:95] Creating CNI manager for ""
	I0728 16:00:10.902686   30316 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 16:00:10.902703   30316 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 16:00:10.902792   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:10.902802   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551 minikube.k8s.io/name=default-k8s-different-port-20220728155420-12923 minikube.k8s.io/updated_at=2022_07_28T16_00_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:10.913574   30316 ops.go:34] apiserver oom_adj: -16
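
The oom_adj check confirms the kernel's OOM killer will spare the apiserver: kubeadm-managed control-plane components run with a strongly negative adjustment (-16 in this run). A sketch of the same read, assuming a single matching pid; /proc/<pid>/oom_adj is the legacy interface the log uses:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiServerOOMAdj resolves the apiserver pid with pgrep, then reads its
// OOM score adjustment from procfs.
func apiServerOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", fmt.Errorf("pgrep: %w", err)
	}
	adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(adj)), nil
}

func main() {
	adj, err := apiServerOOMAdj()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj)
}
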
	I0728 16:00:11.094279   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[log elided: the same `kubectl get sa default` probe was retried every 500ms from 16:00:11.648 through 16:00:23.149]
	I0728 16:00:23.647962   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:23.790473   30316 kubeadm.go:1045] duration metric: took 12.887965506s to wait for elevateKubeSystemPrivileges.
	I0728 16:00:23.790492   30316 kubeadm.go:397] StartCluster complete in 4m48.122451346s
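
The elevateKubeSystemPrivileges wait is the `kubectl get sa default` polling above: the appearance of the "default" ServiceAccount signals that the token controller, and hence the API machinery, is usable, after which cluster-admin is bound to kube-system:default so addons running there can manage the cluster. A sketch with an illustrative two-minute budget:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.24.3/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

	deadline := time.Now().Add(2 * time.Minute) // budget is illustrative
	for exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run() != nil {
		if time.Now().After(deadline) {
			fmt.Println("default ServiceAccount never appeared")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the 500ms cadence in the log
	}
	out, err := exec.Command("sudo", kubectl, "create", "clusterrolebinding",
		"minikube-rbac", "--clusterrole=cluster-admin",
		"--serviceaccount=kube-system:default", kubeconfig).CombinedOutput()
	if err != nil {
		fmt.Printf("clusterrolebinding: %v\n%s", err, out)
		return
	}
	fmt.Println("kube-system service account bound to cluster-admin")
}
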
	I0728 16:00:23.790510   30316 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:00:23.790593   30316 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 16:00:23.791159   30316 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:00:24.307591   30316 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220728155420-12923" rescaled to 1
	I0728 16:00:24.307627   30316 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 16:00:24.307647   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 16:00:24.307667   30316 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0728 16:00:24.307873   30316 config.go:178] Loaded profile config "default-k8s-different-port-20220728155420-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 16:00:24.332263   30316 out.go:177] * Verifying Kubernetes components...
	I0728 16:00:24.332328   30316 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.332327   30316 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.354025   30316 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.354042   30316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 16:00:24.354056   30316 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.332334   30316 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220728155420-12923"
	W0728 16:00:24.354084   30316 addons.go:162] addon storage-provisioner should already be in state true
	I0728 16:00:24.332359   30316 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.354099   30316 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220728155420-12923"
	W0728 16:00:24.354103   30316 addons.go:162] addon metrics-server should already be in state true
	I0728 16:00:24.354085   30316 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220728155420-12923"
	W0728 16:00:24.354129   30316 addons.go:162] addon dashboard should already be in state true
	I0728 16:00:24.354132   30316 host.go:66] Checking if "default-k8s-different-port-20220728155420-12923" exists ...
	I0728 16:00:24.354131   30316 host.go:66] Checking if "default-k8s-different-port-20220728155420-12923" exists ...
	I0728 16:00:24.354155   30316 host.go:66] Checking if "default-k8s-different-port-20220728155420-12923" exists ...
	I0728 16:00:24.354401   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.354528   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.355541   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.355684   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.390460   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.390458   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0728 16:00:24.535958   30316 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0728 16:00:24.520667   30316 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.567843   30316 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220728155420-12923" to be "Ready" ...
	I0728 16:00:24.573082   30316 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0728 16:00:24.573099   30316 addons.go:162] addon default-storageclass should already be in state true
	I0728 16:00:24.635787   30316 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0728 16:00:24.635861   30316 host.go:66] Checking if "default-k8s-different-port-20220728155420-12923" exists ...
	I0728 16:00:24.639025   30316 node_ready.go:49] node "default-k8s-different-port-20220728155420-12923" has status "Ready":"True"
	I0728 16:00:24.657058   30316 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0728 16:00:24.678275   30316 node_ready.go:38] duration metric: took 42.466015ms waiting for node "default-k8s-different-port-20220728155420-12923" to be "Ready" ...
	I0728 16:00:24.737012   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0728 16:00:24.737029   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0728 16:00:24.737022   30316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 16:00:24.678484   30316 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 16:00:24.679284   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.737050   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 16:00:24.715939   30316 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0728 16:00:24.737094   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.737093   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0728 16:00:24.737131   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.737171   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.746547   30316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:24.836630   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 16:00:24.840168   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 16:00:24.841441   30316 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 16:00:24.841451   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 16:00:24.841523   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.841520   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 16:00:24.910866   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 16:00:24.997852   30316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 16:00:25.003279   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0728 16:00:25.003293   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0728 16:00:25.093643   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0728 16:00:25.093667   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0728 16:00:25.113432   30316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 16:00:25.118821   30316 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0728 16:00:25.118833   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0728 16:00:25.202782   30316 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0728 16:00:25.202797   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0728 16:00:25.207372   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0728 16:00:25.207387   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0728 16:00:25.283674   30316 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 16:00:25.283692   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0728 16:00:25.299996   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0728 16:00:25.300010   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0728 16:00:25.401995   30316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 16:00:25.411240   30316 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.020728929s)
	I0728 16:00:25.411263   30316 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
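
The sed pipeline above splices a `hosts` block into the CoreDNS Corefile so that host.minikube.internal resolves to the host machine (192.168.65.2 with Docker Desktop) from inside pods. Reconstructed from the sed expression in the command, the relevant stanza of the replaced ConfigMap looks roughly like:

        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
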
	I0728 16:00:25.482536   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0728 16:00:25.482550   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0728 16:00:25.502802   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0728 16:00:25.502823   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0728 16:00:25.586073   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0728 16:00:25.586092   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0728 16:00:25.689008   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0728 16:00:25.689025   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0728 16:00:25.705569   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 16:00:25.705582   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0728 16:00:25.723569   30316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 16:00:25.991093   30316 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:26.705744   30316 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0728 16:00:26.762993   30316 addons.go:414] enableAddons completed in 2.455353871s
	I0728 16:00:26.767417   30316 pod_ready.go:102] pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace has status "Ready":"False"
	I0728 16:00:29.265572   30316 pod_ready.go:102] pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace has status "Ready":"False"
	I0728 16:00:31.779396   30316 pod_ready.go:97] error getting pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-nmb74" not found
	I0728 16:00:31.779412   30316 pod_ready.go:81] duration metric: took 7.03295771s waiting for pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace to be "Ready" ...
	E0728 16:00:31.779420   30316 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-nmb74" not found
	I0728 16:00:31.779425   30316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-vm6w7" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.784329   30316 pod_ready.go:92] pod "coredns-6d4b75cb6d-vm6w7" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.784338   30316 pod_ready.go:81] duration metric: took 4.908258ms waiting for pod "coredns-6d4b75cb6d-vm6w7" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.784346   30316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.788691   30316 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.788700   30316 pod_ready.go:81] duration metric: took 4.337288ms waiting for pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.788706   30316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.793360   30316 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.793369   30316 pod_ready.go:81] duration metric: took 4.65919ms waiting for pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.793376   30316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.798075   30316 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.798084   30316 pod_ready.go:81] duration metric: took 4.703602ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.798090   30316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pv62j" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.978720   30316 pod_ready.go:92] pod "kube-proxy-pv62j" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.978732   30316 pod_ready.go:81] duration metric: took 180.639115ms waiting for pod "kube-proxy-pv62j" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.978741   30316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:32.383133   30316 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:32.383144   30316 pod_ready.go:81] duration metric: took 404.402924ms waiting for pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:32.383151   30316 pod_ready.go:38] duration metric: took 7.646241882s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 16:00:32.383168   30316 api_server.go:51] waiting for apiserver process to appear ...
	I0728 16:00:32.383219   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:00:32.394067   30316 api_server.go:71] duration metric: took 8.086551332s to wait for apiserver process to appear ...
	I0728 16:00:32.394085   30316 api_server.go:87] waiting for apiserver healthz status ...
	I0728 16:00:32.394093   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 16:00:32.399509   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 200:
	ok
	I0728 16:00:32.400740   30316 api_server.go:140] control plane version: v1.24.3
	I0728 16:00:32.400749   30316 api_server.go:130] duration metric: took 6.659375ms to wait for apiserver health ...
	I0728 16:00:32.400754   30316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 16:00:32.580287   30316 system_pods.go:59] 8 kube-system pods found
	I0728 16:00:32.580302   30316 system_pods.go:61] "coredns-6d4b75cb6d-vm6w7" [00979eea-4984-40c4-9975-4e9ef9c33a1f] Running
	I0728 16:00:32.580306   30316 system_pods.go:61] "etcd-default-k8s-different-port-20220728155420-12923" [8acc6fb5-6fbb-4eb7-ad74-bc24bde492ae] Running
	I0728 16:00:32.580310   30316 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220728155420-12923" [1d24c376-e9f9-4dde-bf95-9c7f4a5ff6de] Running
	I0728 16:00:32.580314   30316 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220728155420-12923" [8fa6e4db-daea-4dcc-a3d3-ad78a001c2e7] Running
	I0728 16:00:32.580317   30316 system_pods.go:61] "kube-proxy-pv62j" [25bad633-52dd-438c-ade8-4b59d566d336] Running
	I0728 16:00:32.580321   30316 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220728155420-12923" [9243a062-f92f-4f4c-8388-a47f60b2b439] Running
	I0728 16:00:32.580329   30316 system_pods.go:61] "metrics-server-5c6f97fb75-58sqm" [55c181c1-c5da-4dbb-8b61-2522d22261f4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 16:00:32.580343   30316 system_pods.go:61] "storage-provisioner" [f0b7ea65-aadc-49a7-8498-f541759e61a9] Running
	I0728 16:00:32.580348   30316 system_pods.go:74] duration metric: took 179.593374ms to wait for pod list to return data ...
	I0728 16:00:32.580353   30316 default_sa.go:34] waiting for default service account to be created ...
	I0728 16:00:32.760390   30316 default_sa.go:45] found service account: "default"
	I0728 16:00:32.760401   30316 default_sa.go:55] duration metric: took 180.047618ms for default service account to be created ...
	I0728 16:00:32.760406   30316 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 16:00:32.965098   30316 system_pods.go:86] 8 kube-system pods found
	I0728 16:00:32.965112   30316 system_pods.go:89] "coredns-6d4b75cb6d-vm6w7" [00979eea-4984-40c4-9975-4e9ef9c33a1f] Running
	I0728 16:00:32.965116   30316 system_pods.go:89] "etcd-default-k8s-different-port-20220728155420-12923" [8acc6fb5-6fbb-4eb7-ad74-bc24bde492ae] Running
	I0728 16:00:32.965120   30316 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220728155420-12923" [1d24c376-e9f9-4dde-bf95-9c7f4a5ff6de] Running
	I0728 16:00:32.965124   30316 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220728155420-12923" [8fa6e4db-daea-4dcc-a3d3-ad78a001c2e7] Running
	I0728 16:00:32.965127   30316 system_pods.go:89] "kube-proxy-pv62j" [25bad633-52dd-438c-ade8-4b59d566d336] Running
	I0728 16:00:32.965132   30316 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220728155420-12923" [9243a062-f92f-4f4c-8388-a47f60b2b439] Running
	I0728 16:00:32.965138   30316 system_pods.go:89] "metrics-server-5c6f97fb75-58sqm" [55c181c1-c5da-4dbb-8b61-2522d22261f4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 16:00:32.965143   30316 system_pods.go:89] "storage-provisioner" [f0b7ea65-aadc-49a7-8498-f541759e61a9] Running
	I0728 16:00:32.965147   30316 system_pods.go:126] duration metric: took 204.741682ms to wait for k8s-apps to be running ...
	I0728 16:00:32.965153   30316 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 16:00:32.965202   30316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 16:00:32.986222   30316 system_svc.go:56] duration metric: took 21.063891ms WaitForService to wait for kubelet.
	I0728 16:00:32.986237   30316 kubeadm.go:572] duration metric: took 8.678736077s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0728 16:00:32.986251   30316 node_conditions.go:102] verifying NodePressure condition ...
	I0728 16:00:33.178411   30316 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 16:00:33.178425   30316 node_conditions.go:123] node cpu capacity is 6
	I0728 16:00:33.178437   30316 node_conditions.go:105] duration metric: took 192.177388ms to run NodePressure ...
	I0728 16:00:33.178453   30316 start.go:216] waiting for startup goroutines ...
	I0728 16:00:33.214457   30316 start.go:506] kubectl: 1.24.2, cluster: 1.24.3 (minor skew: 0)
	I0728 16:00:33.238006   30316 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220728155420-12923" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 22:43:51 UTC, end at Thu 2022-07-28 23:01:33 UTC. --
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[130]: time="2022-07-28T22:43:53.942690810Z" level=info msg="Processing signal 'terminated'"
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[130]: time="2022-07-28T22:43:53.943578596Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[130]: time="2022-07-28T22:43:53.944089211Z" level=info msg="Daemon shutdown complete"
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[130]: time="2022-07-28T22:43:53.944161741Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 systemd[1]: docker.service: Succeeded.
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 systemd[1]: Stopped Docker Application Container Engine.
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 systemd[1]: Starting Docker Application Container Engine...
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.996727993Z" level=info msg="Starting up"
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.998837785Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.998874628Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.998894523Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.998901889Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.999936587Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.999985502Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.999998161Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.000004378Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.003470166Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.008187672Z" level=info msg="Loading containers: start."
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.081875363Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.110811702Z" level=info msg="Loading containers: done."
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.118880813Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.118939961Z" level=info msg="Daemon has completed initialization"
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 systemd[1]: Started Docker Application Container Engine.
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.140764725Z" level=info msg="API listen on [::]:2376"
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.143233308Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-07-28T23:01:35Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:01:35 up  1:22,  0 users,  load average: 0.62, 0.97, 1.04
	Linux old-k8s-version-20220728153807-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 22:43:51 UTC, end at Thu 2022-07-28 23:01:35 UTC. --
	Jul 28 23:01:34 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 28 23:01:34 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Jul 28 23:01:34 old-k8s-version-20220728153807-12923 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 28 23:01:34 old-k8s-version-20220728153807-12923 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 28 23:01:34 old-k8s-version-20220728153807-12923 kubelet[24484]: I0728 23:01:34.823603   24484 server.go:410] Version: v1.16.0
	Jul 28 23:01:34 old-k8s-version-20220728153807-12923 kubelet[24484]: I0728 23:01:34.823871   24484 plugins.go:100] No cloud provider specified.
	Jul 28 23:01:34 old-k8s-version-20220728153807-12923 kubelet[24484]: I0728 23:01:34.823882   24484 server.go:773] Client rotation is on, will bootstrap in background
	Jul 28 23:01:34 old-k8s-version-20220728153807-12923 kubelet[24484]: I0728 23:01:34.825788   24484 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 28 23:01:34 old-k8s-version-20220728153807-12923 kubelet[24484]: W0728 23:01:34.826461   24484 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 28 23:01:34 old-k8s-version-20220728153807-12923 kubelet[24484]: W0728 23:01:34.826536   24484 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 28 23:01:34 old-k8s-version-20220728153807-12923 kubelet[24484]: F0728 23:01:34.826563   24484 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 28 23:01:34 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 28 23:01:34 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 28 23:01:35 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Jul 28 23:01:35 old-k8s-version-20220728153807-12923 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 28 23:01:35 old-k8s-version-20220728153807-12923 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 28 23:01:35 old-k8s-version-20220728153807-12923 kubelet[24519]: I0728 23:01:35.577125   24519 server.go:410] Version: v1.16.0
	Jul 28 23:01:35 old-k8s-version-20220728153807-12923 kubelet[24519]: I0728 23:01:35.577449   24519 plugins.go:100] No cloud provider specified.
	Jul 28 23:01:35 old-k8s-version-20220728153807-12923 kubelet[24519]: I0728 23:01:35.577477   24519 server.go:773] Client rotation is on, will bootstrap in background
	Jul 28 23:01:35 old-k8s-version-20220728153807-12923 kubelet[24519]: I0728 23:01:35.579233   24519 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 28 23:01:35 old-k8s-version-20220728153807-12923 kubelet[24519]: W0728 23:01:35.580768   24519 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 28 23:01:35 old-k8s-version-20220728153807-12923 kubelet[24519]: W0728 23:01:35.581031   24519 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 28 23:01:35 old-k8s-version-20220728153807-12923 kubelet[24519]: F0728 23:01:35.581265   24519 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 28 23:01:35 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 28 23:01:35 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0728 16:01:35.369306   30960 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923: exit status 2 (434.37725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220728153807-12923" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.19s)

TestStartStop/group/embed-certs/serial/Pause (43.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220728154707-12923 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220728154707-12923 -n embed-certs-20220728154707-12923
E0728 15:53:36.595213   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:53:37.025038   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220728154707-12923 -n embed-certs-20220728154707-12923: exit status 2 (16.080569788s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220728154707-12923 -n embed-certs-20220728154707-12923

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220728154707-12923 -n embed-certs-20220728154707-12923: exit status 2 (16.080665808s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220728154707-12923 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220728154707-12923 -n embed-certs-20220728154707-12923
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220728154707-12923 -n embed-certs-20220728154707-12923
E0728 15:54:07.114452   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220728154707-12923
helpers_test.go:235: (dbg) docker inspect embed-certs-20220728154707-12923:

-- stdout --
	[
	    {
	        "Id": "c8903a71c1cd8f52218490628108c4a9ed7d61f5462af3eb2947b10273b7e2f7",
	        "Created": "2022-07-28T22:47:13.928680238Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 267797,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:48:18.01746171Z",
	            "FinishedAt": "2022-07-28T22:48:16.073367691Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/c8903a71c1cd8f52218490628108c4a9ed7d61f5462af3eb2947b10273b7e2f7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c8903a71c1cd8f52218490628108c4a9ed7d61f5462af3eb2947b10273b7e2f7/hostname",
	        "HostsPath": "/var/lib/docker/containers/c8903a71c1cd8f52218490628108c4a9ed7d61f5462af3eb2947b10273b7e2f7/hosts",
	        "LogPath": "/var/lib/docker/containers/c8903a71c1cd8f52218490628108c4a9ed7d61f5462af3eb2947b10273b7e2f7/c8903a71c1cd8f52218490628108c4a9ed7d61f5462af3eb2947b10273b7e2f7-json.log",
	        "Name": "/embed-certs-20220728154707-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220728154707-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220728154707-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/663997aeeae59a48d23373ab7860a74585cc1ef039d3f4f056fdf93dafa294ec-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/docker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d059732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/663997aeeae59a48d23373ab7860a74585cc1ef039d3f4f056fdf93dafa294ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/663997aeeae59a48d23373ab7860a74585cc1ef039d3f4f056fdf93dafa294ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/663997aeeae59a48d23373ab7860a74585cc1ef039d3f4f056fdf93dafa294ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220728154707-12923",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220728154707-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220728154707-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220728154707-12923",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220728154707-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d12fbc03db54d75685e0fd309090c0db0f4afea6a9a4ce714c86ccea24f59d6b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59130"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59131"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59132"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59133"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d12fbc03db54",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220728154707-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c8903a71c1cd",
	                        "embed-certs-20220728154707-12923"
	                    ],
	                    "NetworkID": "a5e0757ffa1aeaefcad3caee47c45090e6ace6f23ba00064b8d9c81124c9d655",
	                    "EndpointID": "5369407a8ce361251bd52c07665c2df50a11be5bc36e51f2d0f029bf3419fd36",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220728154707-12923 -n embed-certs-20220728154707-12923
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220728154707-12923 logs -n 25
E0728 15:54:10.581836   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220728154707-12923 logs -n 25: (2.906537344s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |               Profile                |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|--------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | old-k8s-version-20220728153807-12923 | jenkins | v1.26.0 | 28 Jul 22 15:38 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                      |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                      |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                      |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                      |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                      |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220728152330-12923         | jenkins | v1.26.0 | 28 Jul 22 15:38 PDT | 28 Jul 22 15:38 PDT |
	|         | kubenet-20220728152330-12923                      |                                      |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                      |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220728152330-12923         | jenkins | v1.26.0 | 28 Jul 22 15:39 PDT | 28 Jul 22 15:39 PDT |
	|         | kubenet-20220728152330-12923                      |                                      |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:39 PDT | 28 Jul 22 15:40 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                      |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:40 PDT | 28 Jul 22 15:40 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                      |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                      |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:40 PDT | 28 Jul 22 15:41 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                      |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:41 PDT | 28 Jul 22 15:41 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                      |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:41 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                      |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220728153807-12923 | jenkins | v1.26.0 | 28 Jul 22 15:42 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                      |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                      |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                      |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220728153807-12923 | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT | 28 Jul 22 15:43 PDT |
	|         | old-k8s-version-20220728153807-12923              |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                      |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220728153807-12923 | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT | 28 Jul 22 15:43 PDT |
	|         | old-k8s-version-20220728153807-12923              |                                      |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                      |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220728153807-12923 | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                      |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                      |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                      |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                      |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                      |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                      |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                      |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                      |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                      |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                      |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                      |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                      |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                      |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	|---------|---------------------------------------------------|--------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 15:48:16
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
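
The header above documents the klog-style line layout used throughout this trace. As a rough illustration (not minikube's own code), a small Go program can split such lines into their fields; the regexp and field names below are assumptions matching the stated format:

package main

import (
	"fmt"
	"regexp"
)

// Illustrative only: split a klog-style line of the form
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg into its fields.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	line := "I0728 15:48:16.802276   29417 out.go:303] Setting JSON to false"
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
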
	I0728 15:48:16.801497   29417 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:48:16.801653   29417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:48:16.801659   29417 out.go:309] Setting ErrFile to fd 2...
	I0728 15:48:16.801663   29417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:48:16.801763   29417 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:48:16.802276   29417 out.go:303] Setting JSON to false
	I0728 15:48:16.817752   29417 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9538,"bootTime":1659038958,"procs":351,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:48:16.817842   29417 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:48:16.838782   29417 out.go:177] * [embed-certs-20220728154707-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:48:16.880995   29417 notify.go:193] Checking for updates...
	I0728 15:48:16.901706   29417 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:48:16.927978   29417 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:48:16.949282   29417 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:48:16.971082   29417 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:48:16.992933   29417 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:48:17.014719   29417 config.go:178] Loaded profile config "embed-certs-20220728154707-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:48:17.015366   29417 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:48:17.082982   29417 docker.go:137] docker version: linux-20.10.17
	I0728 15:48:17.083098   29417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:48:17.212741   29417 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:48:17.139329135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
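
The `docker system info --format "{{json .}}"` call above emits the whole report as a single JSON object, so a caller can decode just the fields it needs. A hedged Go sketch of that decode (the dockerInfo struct is hypothetical; its field names match keys visible in the dump):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo holds only a few of the many keys docker emits; unknown keys
// are ignored by encoding/json on decode.
type dockerInfo struct {
	ServerVersion   string
	OperatingSystem string
	NCPU            int
	MemTotal        int64
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%s on %q, %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
}
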
	I0728 15:48:17.234803   29417 out.go:177] * Using the docker driver based on existing profile
	I0728 15:48:17.256309   29417 start.go:284] selected driver: docker
	I0728 15:48:17.256336   29417 start.go:808] validating driver "docker" against &{Name:embed-certs-20220728154707-12923 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220728154707-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:48:17.256500   29417 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:48:17.259510   29417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:48:17.389758   29417 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:48:17.31689262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:48:17.389932   29417 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 15:48:17.389950   29417 cni.go:95] Creating CNI manager for ""
	I0728 15:48:17.389959   29417 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:48:17.389967   29417 start_flags.go:310] config:
	{Name:embed-certs-20220728154707-12923 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220728154707-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:48:17.433599   29417 out.go:177] * Starting control plane node embed-certs-20220728154707-12923 in cluster embed-certs-20220728154707-12923
	I0728 15:48:17.455669   29417 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:48:17.477751   29417 out.go:177] * Pulling base image ...
	I0728 15:48:17.519730   29417 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:48:17.519790   29417 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:48:17.519811   29417 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0728 15:48:17.519834   29417 cache.go:57] Caching tarball of preloaded images
	I0728 15:48:17.520022   29417 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:48:17.520061   29417 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
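
The three cache lines above show a check-before-download pattern: the preload tarball is looked up on disk and the download is skipped when it is already cached. A minimal Go sketch of that pattern, where preloadPath is a hypothetical stand-in for minikube's real cache-layout logic:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath rebuilds the tarball name seen in the log for a given
// Kubernetes version (illustrative, not minikube's actual helper).
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-docker-overlay2-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.24.3")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p) // the cache-hit branch above
	} else if os.IsNotExist(err) {
		fmt.Println("no local preload, would download:", p)
	} else {
		fmt.Println("stat failed:", err)
	}
}
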
	I0728 15:48:17.521034   29417 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/config.json ...
	I0728 15:48:17.586890   29417 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:48:17.586906   29417 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:48:17.586916   29417 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:48:17.586967   29417 start.go:370] acquiring machines lock for embed-certs-20220728154707-12923: {Name:mkafc927efa8de6adf00771129c22ebc3d05578e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:48:17.587045   29417 start.go:374] acquired machines lock for "embed-certs-20220728154707-12923" in 61.043µs
	I0728 15:48:17.587065   29417 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:48:17.587073   29417 fix.go:55] fixHost starting: 
	I0728 15:48:17.587306   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:48:17.651525   29417 fix.go:103] recreateIfNeeded on embed-certs-20220728154707-12923: state=Stopped err=<nil>
	W0728 15:48:17.651560   29417 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:48:17.673382   29417 out.go:177] * Restarting existing docker container for "embed-certs-20220728154707-12923" ...
	I0728 15:48:17.694189   29417 cli_runner.go:164] Run: docker start embed-certs-20220728154707-12923
	I0728 15:48:18.024290   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:48:18.089501   29417 kic.go:415] container "embed-certs-20220728154707-12923" state is running.
	I0728 15:48:18.090096   29417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220728154707-12923
	I0728 15:48:18.159136   29417 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/config.json ...
	I0728 15:48:18.159526   29417 machine.go:88] provisioning docker machine ...
	I0728 15:48:18.159550   29417 ubuntu.go:169] provisioning hostname "embed-certs-20220728154707-12923"
	I0728 15:48:18.159621   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:18.226433   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:18.226656   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:18.226667   29417 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220728154707-12923 && echo "embed-certs-20220728154707-12923" | sudo tee /etc/hostname
	I0728 15:48:18.356370   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220728154707-12923
	
	I0728 15:48:18.356474   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:18.422345   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:18.422505   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:18.422519   29417 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220728154707-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220728154707-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220728154707-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:48:18.541839   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: 
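
The shell snippet above makes the /etc/hosts entry idempotent: if the hostname is already mapped nothing happens, an existing 127.0.1.1 line is rewritten in place, and otherwise one is appended. The same skip/replace/append decisions on an in-memory hosts file, as an illustrative Go sketch (ensureHostname is hypothetical):

package main

import (
	"fmt"
	"strings"
)

// ensureHostname mirrors the shell's logic against a string copy of /etc/hosts.
func ensureHostname(hosts, name string) string {
	if strings.Contains(hosts, " "+name) {
		return hosts // hostname already mapped: nothing to do
	}
	lines := strings.Split(hosts, "\n")
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing loopback alias
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name // no alias line yet: append one
}

func main() {
	fmt.Println(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name", "embed-certs-20220728154707-12923"))
}
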
	I0728 15:48:18.541862   29417 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:48:18.541893   29417 ubuntu.go:177] setting up certificates
	I0728 15:48:18.541906   29417 provision.go:83] configureAuth start
	I0728 15:48:18.541981   29417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220728154707-12923
	I0728 15:48:18.607419   29417 provision.go:138] copyHostCerts
	I0728 15:48:18.607510   29417 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:48:18.607520   29417 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:48:18.607611   29417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:48:18.607810   29417 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:48:18.607820   29417 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:48:18.607885   29417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:48:18.608037   29417 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:48:18.608043   29417 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:48:18.608100   29417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:48:18.608266   29417 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220728154707-12923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220728154707-12923]
	I0728 15:48:18.782136   29417 provision.go:172] copyRemoteCerts
	I0728 15:48:18.782203   29417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:48:18.782257   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:18.846797   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:18.934758   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:48:18.952068   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0728 15:48:18.968832   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 15:48:18.984908   29417 provision.go:86] duration metric: configureAuth took 442.991274ms
	I0728 15:48:18.984926   29417 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:48:18.985086   29417 config.go:178] Loaded profile config "embed-certs-20220728154707-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:48:18.985144   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.049284   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:19.049430   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:19.049439   29417 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:48:19.169479   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:48:19.169492   29417 ubuntu.go:71] root file system type: overlay
	I0728 15:48:19.169633   29417 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:48:19.169705   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.234334   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:19.234474   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:19.234535   29417 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:48:19.363314   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:48:19.363389   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.474028   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:19.474168   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:19.474181   29417 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:48:19.600015   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:48:19.600035   29417 machine.go:91] provisioned docker machine in 1.440525827s
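
The diff/mv command above only swaps in docker.service.new, reloads systemd, and restarts Docker when the rendered unit actually differs from the installed one, so an unchanged unit never triggers a daemon restart. A sketch of that write-only-if-changed idiom (writeIfChanged is illustrative; the restart itself is stubbed out):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// writeIfChanged writes the rendered content only when it differs from what
// is on disk, and reports whether a restart would be needed.
func writeIfChanged(path string, want []byte) (changed bool, err error) {
	have, err := os.ReadFile(path)
	if err == nil && bytes.Equal(have, want) {
		return false, nil // identical content: leave the service untouched
	}
	if err := os.WriteFile(path, want, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := writeIfChanged("/tmp/docker.service.demo", []byte("[Unit]\nDescription=demo\n"))
	if err != nil {
		fmt.Println("write failed:", err)
		return
	}
	if changed {
		fmt.Println("unit changed; would run: systemctl daemon-reload && systemctl restart docker")
	} else {
		fmt.Println("unit unchanged; skipping restart")
	}
}
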
	I0728 15:48:19.600045   29417 start.go:307] post-start starting for "embed-certs-20220728154707-12923" (driver="docker")
	I0728 15:48:19.600050   29417 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:48:19.600116   29417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:48:19.600159   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.664230   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:19.753490   29417 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:48:19.756748   29417 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:48:19.756763   29417 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:48:19.756771   29417 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:48:19.756775   29417 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:48:19.756786   29417 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:48:19.756892   29417 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:48:19.757034   29417 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:48:19.757184   29417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:48:19.764556   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:48:19.780836   29417 start.go:310] post-start completed in 180.786416ms
	I0728 15:48:19.780924   29417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:48:19.780983   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.845831   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:19.931331   29417 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:48:19.935737   29417 fix.go:57] fixHost completed within 2.348703259s
	I0728 15:48:19.935747   29417 start.go:82] releasing machines lock for "embed-certs-20220728154707-12923", held for 2.348733925s
	I0728 15:48:19.935815   29417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220728154707-12923
	I0728 15:48:19.999291   29417 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:48:19.999293   29417 ssh_runner.go:195] Run: systemctl --version
	I0728 15:48:19.999351   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.999350   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:20.066602   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:20.066639   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:20.336780   29417 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:48:20.345950   29417 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:48:20.346003   29417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:48:20.357290   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:48:20.369623   29417 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:48:20.444639   29417 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:48:20.511294   29417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:48:20.570419   29417 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:48:20.816253   29417 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 15:48:20.888550   29417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:48:20.958099   29417 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 15:48:20.967785   29417 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 15:48:20.967856   29417 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 15:48:20.971649   29417 start.go:471] Will wait 60s for crictl version
	I0728 15:48:20.971697   29417 ssh_runner.go:195] Run: sudo crictl version
	I0728 15:48:21.074684   29417 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0728 15:48:21.074748   29417 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:48:21.109656   29417 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:48:21.190920   29417 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 15:48:21.191106   29417 cli_runner.go:164] Run: docker exec -t embed-certs-20220728154707-12923 dig +short host.docker.internal
	I0728 15:48:21.315815   29417 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:48:21.315918   29417 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:48:21.320085   29417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:48:21.329958   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:21.394974   29417 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:48:21.395041   29417 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:48:21.425612   29417 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0728 15:48:21.425628   29417 docker.go:542] Images already preloaded, skipping extraction
	I0728 15:48:21.425703   29417 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:48:21.455867   29417 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0728 15:48:21.455887   29417 cache_images.go:84] Images are preloaded, skipping loading
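
Both `docker images` listings above already contain every image the preload provides, so extraction is skipped. That check amounts to a set difference between the expected and reported image lists; a sketch under that assumption (missing is a hypothetical helper, and the lists are abbreviated):

package main

import "fmt"

// missing returns the expected images that `docker images` did not report.
func missing(expected, have []string) []string {
	got := make(map[string]bool, len(have))
	for _, img := range have {
		got[img] = true
	}
	var out []string
	for _, img := range expected {
		if !got[img] {
			out = append(out, img)
		}
	}
	return out
}

func main() {
	expected := []string{"k8s.gcr.io/kube-apiserver:v1.24.3", "k8s.gcr.io/etcd:3.5.3-0", "k8s.gcr.io/pause:3.7"}
	have := []string{"k8s.gcr.io/kube-apiserver:v1.24.3", "k8s.gcr.io/pause:3.7"}
	fmt.Println(missing(expected, have)) // [k8s.gcr.io/etcd:3.5.3-0] would still need loading
}
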
	I0728 15:48:21.455976   29417 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:48:21.528212   29417 cni.go:95] Creating CNI manager for ""
	I0728 15:48:21.528224   29417 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:48:21.528238   29417 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:48:21.528250   29417 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220728154707-12923 NodeName:embed-certs-20220728154707-12923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:48:21.528359   29417 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220728154707-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
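The kubeadm config above is one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---` lines. A toy Go reader that splits on the separator and reports each document's kind (kinds is a hypothetical helper; stdlib only, so it scans for the kind line rather than parsing full YAML):

package main

import (
	"fmt"
	"strings"
)

// kinds splits a multi-document YAML stream on the document separator and
// pulls out each document's "kind:" value.
func kinds(config string) []string {
	var out []string
	for _, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return out
}

func main() {
	cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration"
	fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}
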
	I0728 15:48:21.528439   29417 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220728154707-12923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220728154707-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0728 15:48:21.528500   29417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 15:48:21.536494   29417 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:48:21.536551   29417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:48:21.543737   29417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0728 15:48:21.556138   29417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:48:21.568510   29417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0728 15:48:21.580344   29417 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:48:21.583879   29417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:48:21.593400   29417 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923 for IP: 192.168.67.2
	I0728 15:48:21.593521   29417 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:48:21.593573   29417 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:48:21.593648   29417 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/client.key
	I0728 15:48:21.593716   29417 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/apiserver.key.c7fa3a9e
	I0728 15:48:21.593765   29417 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/proxy-client.key
	I0728 15:48:21.593961   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:48:21.593997   29417 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:48:21.594013   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:48:21.594046   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:48:21.594075   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:48:21.594102   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:48:21.594168   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:48:21.594672   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:48:21.611272   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 15:48:21.627747   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:48:21.644521   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 15:48:21.661383   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:48:21.677553   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:48:21.693614   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:48:21.710372   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:48:21.727241   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:48:21.743529   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:48:21.760108   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:48:21.776707   29417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 15:48:21.790531   29417 ssh_runner.go:195] Run: openssl version
	I0728 15:48:21.796365   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:48:21.803971   29417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:48:21.829542   29417 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:48:21.829588   29417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:48:21.834653   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 15:48:21.841901   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:48:21.849320   29417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:48:21.853199   29417 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:48:21.853254   29417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:48:21.858491   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 15:48:21.865400   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:48:21.872773   29417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:48:21.876616   29417 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:48:21.876657   29417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:48:21.881915   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 15:48:21.888912   29417 kubeadm.go:395] StartCluster: {Name:embed-certs-20220728154707-12923 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220728154707-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:48:21.889007   29417 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:48:21.917473   29417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:48:21.925062   29417 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 15:48:21.925078   29417 kubeadm.go:626] restartCluster start
	I0728 15:48:21.925120   29417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 15:48:21.931678   29417 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:21.931736   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:21.995670   29417 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220728154707-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:48:21.995837   29417 kubeconfig.go:127] "embed-certs-20220728154707-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 15:48:21.996174   29417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:48:21.997302   29417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 15:48:22.004907   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.004963   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.013141   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:22.215267   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.215476   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.226779   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:22.415295   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.415493   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.426659   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:22.614923   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.615024   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.625988   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:22.813546   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.813626   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.822993   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.015317   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.015544   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.025792   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.215277   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.215416   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.226602   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.413566   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.413660   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.424136   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.613588   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.613654   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.622662   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.814664   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.814775   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.825692   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.015300   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.015423   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.026137   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.215292   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.215465   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.226205   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.413410   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.413583   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.423694   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.614714   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.614873   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.625456   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.814430   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.814534   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.825126   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.013304   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:25.013447   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:25.024258   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.024268   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:25.024313   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:25.032119   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.032130   29417 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0728 15:48:25.032138   29417 kubeadm.go:1092] stopping kube-system containers ...
	I0728 15:48:25.032191   29417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:48:25.061839   29417 docker.go:443] Stopping containers: [29457d4df313 6ef1cce588df 7d5bc5e389db 8302d6c6443f cdfeb1ef5523 0a683159e91c 25acc0420e55 d443c52be75c a610b508b031 06e6110a99f1 8e2b7312dd3c 7c97803e2ece 4ef5ce52c329 2f78d4daa24c 6855ea1ad5cf f6b3a99aae08]
	I0728 15:48:25.061917   29417 ssh_runner.go:195] Run: docker stop 29457d4df313 6ef1cce588df 7d5bc5e389db 8302d6c6443f cdfeb1ef5523 0a683159e91c 25acc0420e55 d443c52be75c a610b508b031 06e6110a99f1 8e2b7312dd3c 7c97803e2ece 4ef5ce52c329 2f78d4daa24c 6855ea1ad5cf f6b3a99aae08
	I0728 15:48:25.098171   29417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 15:48:25.111390   29417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:48:25.119798   29417 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 28 22:47 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 28 22:47 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jul 28 22:47 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 28 22:47 /etc/kubernetes/scheduler.conf
	
	I0728 15:48:25.119853   29417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 15:48:25.127053   29417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 15:48:25.134998   29417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 15:48:25.142975   29417 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.143035   29417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 15:48:25.150202   29417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 15:48:25.157246   29417 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.157293   29417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0728 15:48:25.164016   29417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:48:25.171050   29417 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 15:48:25.171059   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:25.216093   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:25.844343   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:26.021662   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:26.072258   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:26.134783   29417 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:48:26.134841   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:48:26.644326   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:48:27.144282   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:48:27.154756   29417 api_server.go:71] duration metric: took 1.019990602s to wait for apiserver process to appear ...
	I0728 15:48:27.154775   29417 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:48:27.154789   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:27.155957   29417 api_server.go:256] stopped: https://127.0.0.1:59133/healthz: Get "https://127.0.0.1:59133/healthz": EOF
	I0728 15:48:27.657022   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:30.452815   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 15:48:30.452848   29417 api_server.go:102] status: https://127.0.0.1:59133/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 15:48:30.657052   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:30.662787   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:48:30.662799   29417 api_server.go:102] status: https://127.0.0.1:59133/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:48:31.156224   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:31.161953   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:48:31.161970   29417 api_server.go:102] status: https://127.0.0.1:59133/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:48:31.656203   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:31.664096   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 200:
	ok
	I0728 15:48:31.670530   29417 api_server.go:140] control plane version: v1.24.3
	I0728 15:48:31.670542   29417 api_server.go:130] duration metric: took 4.515837693s to wait for apiserver health ...
	I0728 15:48:31.670547   29417 cni.go:95] Creating CNI manager for ""
	I0728 15:48:31.670552   29417 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:48:31.670562   29417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:48:31.676716   29417 system_pods.go:59] 8 kube-system pods found
	I0728 15:48:31.676732   29417 system_pods.go:61] "coredns-6d4b75cb6d-sz4ss" [b1735e46-67cb-4a2a-9a12-260c98968b3a] Running
	I0728 15:48:31.676746   29417 system_pods.go:61] "etcd-embed-certs-20220728154707-12923" [a389e720-76d6-499e-b34e-3f8013bce707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0728 15:48:31.676751   29417 system_pods.go:61] "kube-apiserver-embed-certs-20220728154707-12923" [b5ea0f7a-5c6d-41c4-8e7c-f86e90a70222] Running
	I0728 15:48:31.676757   29417 system_pods.go:61] "kube-controller-manager-embed-certs-20220728154707-12923" [9c192195-2527-4438-aa9b-bffc0aebccd1] Running
	I0728 15:48:31.676760   29417 system_pods.go:61] "kube-proxy-hhj48" [11442494-68d4-468e-b506-0302c7692a8d] Running
	I0728 15:48:31.676765   29417 system_pods.go:61] "kube-scheduler-embed-certs-20220728154707-12923" [37f5c49d-8386-4440-ba01-f9d4a3eb7d05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 15:48:31.676772   29417 system_pods.go:61] "metrics-server-5c6f97fb75-b525p" [1aad746e-e8e5-44ae-a006-2655a20b240b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:48:31.676777   29417 system_pods.go:61] "storage-provisioner" [0e4110a8-11ee-4fe9-8b5e-6874c4466099] Running
	I0728 15:48:31.676780   29417 system_pods.go:74] duration metric: took 6.214305ms to wait for pod list to return data ...
	I0728 15:48:31.676788   29417 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:48:31.679407   29417 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:48:31.679420   29417 node_conditions.go:123] node cpu capacity is 6
	I0728 15:48:31.679429   29417 node_conditions.go:105] duration metric: took 2.637683ms to run NodePressure ...
	I0728 15:48:31.679439   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:31.806182   29417 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0728 15:48:31.829087   29417 kubeadm.go:777] kubelet initialised
	I0728 15:48:31.829099   29417 kubeadm.go:778] duration metric: took 4.724763ms waiting for restarted kubelet to initialise ...
	I0728 15:48:31.829106   29417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:48:31.835265   29417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-sz4ss" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:31.839853   29417 pod_ready.go:92] pod "coredns-6d4b75cb6d-sz4ss" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:31.839863   29417 pod_ready.go:81] duration metric: took 4.583461ms waiting for pod "coredns-6d4b75cb6d-sz4ss" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:31.839869   29417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:33.853326   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:36.349932   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:38.351762   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:40.850107   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:42.852862   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:44.852267   29417 pod_ready.go:92] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:44.852279   29417 pod_ready.go:81] duration metric: took 13.012622945s waiting for pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:44.852286   29417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:44.857375   29417 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:44.857384   29417 pod_ready.go:81] duration metric: took 5.09412ms waiting for pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:44.857392   29417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:46.867189   29417 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:47.367656   29417 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:47.367668   29417 pod_ready.go:81] duration metric: took 2.510313483s waiting for pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.367678   29417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hhj48" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.371404   29417 pod_ready.go:92] pod "kube-proxy-hhj48" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:47.371412   29417 pod_ready.go:81] duration metric: took 3.727786ms waiting for pod "kube-proxy-hhj48" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.371420   29417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.375602   29417 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:47.375609   29417 pod_ready.go:81] duration metric: took 4.18498ms waiting for pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.375616   29417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:49.387717   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:51.885666   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:53.886054   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:56.384883   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:58.388307   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:00.887815   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:03.384496   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:05.387715   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:07.885515   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:09.885979   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:11.887797   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:14.388045   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:16.885529   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:18.885542   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:20.887317   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:23.386722   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:25.387475   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:27.885734   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:29.887218   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:32.385298   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:34.884952   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:36.886061   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:38.887042   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:41.384351   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:43.385883   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:45.885734   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:48.387488   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:50.888333   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:53.387254   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:55.883448   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	W0728 15:49:56.989217   28750 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0728 15:49:56.989249   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0728 15:49:57.413331   28750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:49:57.423140   28750 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:49:57.423194   28750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:49:57.430740   28750 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 15:49:57.430758   28750 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:49:58.137590   28750 out.go:204]   - Generating certificates and keys ...
	I0728 15:49:58.961107   28750 out.go:204]   - Booting up control plane ...
	I0728 15:49:57.884245   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:59.887006   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:02.387156   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:04.886992   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:07.387540   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:09.884586   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:11.887663   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:14.385348   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:16.386759   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:18.885993   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:21.386518   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:23.883548   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:25.883909   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:27.885135   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:29.886340   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:32.385238   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:34.885505   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:36.885858   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:39.385952   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:41.386558   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:43.884916   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:46.385722   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:48.885851   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:51.386119   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:53.885644   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:56.384861   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:50:58.385124   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:00.385353   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:02.884249   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:04.885728   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:07.385528   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:09.885039   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:11.885545   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:14.385339   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:16.885351   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:18.887968   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:21.382353   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:23.385169   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:25.885193   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:28.385639   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:30.885396   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:33.384905   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:35.884757   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:38.382709   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:40.385773   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:42.883260   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:45.384114   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:47.883140   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:49.883283   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:53.880636   28750 kubeadm.go:397] StartCluster complete in 7m58.936593061s
	I0728 15:51:53.880713   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:51:53.913209   28750 logs.go:274] 0 containers: []
	W0728 15:51:53.913222   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:51:53.913282   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:51:53.943409   28750 logs.go:274] 0 containers: []
	W0728 15:51:53.943421   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:51:53.943481   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:51:53.973451   28750 logs.go:274] 0 containers: []
	W0728 15:51:53.973463   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:51:53.973516   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:51:54.002910   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.002922   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:51:54.002981   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:51:54.035653   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.035665   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:51:54.035724   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:51:54.068593   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.068606   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:51:54.068668   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:51:54.098273   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.098285   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:51:54.098344   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:51:54.127232   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.127244   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:51:54.127252   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:51:54.127259   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:51:56.179496   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052257174s)
	I0728 15:51:56.179636   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:51:56.179644   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:51:56.220729   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:51:56.220744   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:51:56.232226   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:51:56.232240   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:51:56.289365   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0728 15:51:56.289376   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:51:56.289383   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0728 15:51:56.303663   28750 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
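
The repeated kubelet-check failures above all trace back to a single probe: kubeadm polls the kubelet's healthz endpoint on port 10248 until it answers. Below is a minimal Go sketch of that probe, using the port shown in the log; a "connection refused" error, as seen above, means the kubelet process is not listening at all.

    // kubelet_healthz.go: probe the kubelet health endpoint that kubeadm
    // polls during wait-control-plane (port 10248, as in the log above).
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("http://localhost:10248/healthz")
        if err != nil {
            // "connect: connection refused" here matches the failure
            // mode in the log: the kubelet is not running at all.
            fmt.Println("kubelet not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
    }
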
	W0728 15:51:56.303683   28750 out.go:239] * 
	W0728 15:51:56.303804   28750 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[kubeadm init stdout/stderr identical to the first occurrence above]
	
	W0728 15:51:56.303823   28750 out.go:239] * 
	W0728 15:51:56.304345   28750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 15:51:56.399661   28750 out.go:177] 
	W0728 15:51:56.441944   28750 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[kubeadm init stdout/stderr identical to the first occurrence above]
	
	W0728 15:51:56.442070   28750 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0728 15:51:56.442149   28750 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0728 15:51:56.483570   28750 out.go:177] 
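
The suggestion the log prints above is concrete: retry the start with the kubelet's cgroup driver forced to systemd. A sketch of that retry, shelling out the same way the test harness does; the profile name here is a hypothetical placeholder.

    // retry_start.go: re-run minikube start with the flag suggested in
    // the K8S_KUBELET_NOT_RUNNING advice above. The profile name is a
    // hypothetical placeholder, not taken from this run.
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("minikube", "start",
            "-p", "example-profile", // hypothetical profile name
            "--extra-config=kubelet.cgroup-driver=systemd")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }
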
	I0728 15:51:51.885062   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:54.384742   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:56.405584   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:58.885045   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:01.382557   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:03.384708   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:05.883834   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:08.383440   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:10.884370   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:13.384787   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:15.882789   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:17.883026   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:19.884572   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:22.381729   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:24.385454   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:26.884654   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:29.384420   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:31.883692   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:34.382130   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:36.383023   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:38.885843   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:41.383884   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:43.880996   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:45.882240   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:47.375364   29417 pod_ready.go:81] duration metric: took 4m0.003710612s waiting for pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace to be "Ready" ...
	E0728 15:52:47.375409   29417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0728 15:52:47.375443   29417 pod_ready.go:38] duration metric: took 4m15.550586171s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
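
The metrics-server wait above is a plain poll-with-deadline loop: check the pod's Ready condition every couple of seconds until it holds or the 4m0s budget expires. A minimal sketch of the pattern; checkReady is a hypothetical stand-in for the real pod status lookup.

    // wait_ready.go: the poll-with-deadline pattern behind the
    // pod_ready lines above (checkReady is a hypothetical stand-in).
    package main

    import (
        "fmt"
        "time"
    )

    func checkReady() bool { return false } // hypothetical condition

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            if checkReady() {
                fmt.Println("ready")
                return
            }
            time.Sleep(2 * time.Second) // roughly the cadence in the log
        }
        fmt.Println("timed out waiting for the condition")
    }
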
	I0728 15:52:47.375472   29417 kubeadm.go:630] restartCluster took 4m25.45481034s
	W0728 15:52:47.375941   29417 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0728 15:52:47.375998   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0728 15:52:49.817713   29417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.441734678s)
	I0728 15:52:49.817775   29417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:52:49.827120   29417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:52:49.834217   29417 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:52:49.834261   29417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:52:49.841453   29417 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
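
The "config check failed, skipping stale config cleanup" line above is the expected outcome right after a kubeadm reset: all four kubeconfig files are gone, so there is nothing stale to clean up. A sketch of the same check run locally; the real check runs the ls over SSH inside the node container.

    // stale_config_check.go: recreate the check behind "config check
    // failed, skipping stale config cleanup" (run locally here; the
    // real check runs over SSH inside the node container).
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        found := 0
        for _, f := range files {
            if _, err := os.Stat(f); err == nil {
                found++
                fmt.Println("existing config:", f)
            }
        }
        if found == 0 {
            fmt.Println("no existing configs; skipping stale config cleanup")
        }
    }
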
	I0728 15:52:49.841479   29417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:52:50.124147   29417 out.go:204]   - Generating certificates and keys ...
	I0728 15:52:50.778545   29417 out.go:204]   - Booting up control plane ...
	I0728 15:52:56.833665   29417 out.go:204]   - Configuring RBAC rules ...
	I0728 15:52:57.217898   29417 cni.go:95] Creating CNI manager for ""
	I0728 15:52:57.217910   29417 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:52:57.217931   29417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 15:52:57.218029   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:52:57.218044   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551 minikube.k8s.io/name=embed-certs-20220728154707-12923 minikube.k8s.io/updated_at=2022_07_28T15_52_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:52:57.228119   29417 ops.go:34] apiserver oom_adj: -16
	I0728 15:52:57.347112   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:52:57.924017   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:52:58.422483   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:52:58.922247   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:52:59.422049   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:52:59.922434   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:00.421887   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:00.922294   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:01.422091   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:01.922001   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:02.422182   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:02.921905   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:03.422209   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:03.922915   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:04.422395   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:04.921979   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:05.422349   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:05.923128   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:06.421972   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:06.921905   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:07.423616   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:07.923778   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:08.422276   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:08.922086   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:09.421795   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:09.922089   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:09.976078   29417 kubeadm.go:1045] duration metric: took 12.758317756s to wait for elevateKubeSystemPrivileges.
	I0728 15:53:09.976125   29417 kubeadm.go:397] StartCluster complete in 4m48.092013069s
	I0728 15:53:09.976150   29417 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:53:09.976224   29417 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:53:09.976957   29417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:53:10.492087   29417 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220728154707-12923" rescaled to 1
	I0728 15:53:10.492129   29417 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 15:53:10.492148   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 15:53:10.492168   29417 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0728 15:53:10.515270   29417 out.go:177] * Verifying Kubernetes components...
	I0728 15:53:10.492298   29417 config.go:178] Loaded profile config "embed-certs-20220728154707-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:53:10.515348   29417 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220728154707-12923"
	I0728 15:53:10.515351   29417 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220728154707-12923"
	I0728 15:53:10.515351   29417 addons.go:65] Setting dashboard=true in profile "embed-certs-20220728154707-12923"
	I0728 15:53:10.515354   29417 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220728154707-12923"
	I0728 15:53:10.572172   29417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:53:10.572175   29417 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220728154707-12923"
	I0728 15:53:10.572176   29417 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220728154707-12923"
	W0728 15:53:10.572185   29417 addons.go:162] addon metrics-server should already be in state true
	I0728 15:53:10.572188   29417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220728154707-12923"
	W0728 15:53:10.572191   29417 addons.go:162] addon storage-provisioner should already be in state true
	I0728 15:53:10.572199   29417 addons.go:153] Setting addon dashboard=true in "embed-certs-20220728154707-12923"
	W0728 15:53:10.572224   29417 addons.go:162] addon dashboard should already be in state true
	I0728 15:53:10.572234   29417 host.go:66] Checking if "embed-certs-20220728154707-12923" exists ...
	I0728 15:53:10.572234   29417 host.go:66] Checking if "embed-certs-20220728154707-12923" exists ...
	I0728 15:53:10.572263   29417 host.go:66] Checking if "embed-certs-20220728154707-12923" exists ...
	I0728 15:53:10.572531   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:53:10.572687   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:53:10.573316   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:53:10.573324   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:53:10.581651   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0728 15:53:10.601502   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:53:10.693262   29417 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0728 15:53:10.701418   29417 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220728154707-12923"
	I0728 15:53:10.715188   29417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0728 15:53:10.752047   29417 addons.go:162] addon default-storageclass should already be in state true
	I0728 15:53:10.773336   29417 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0728 15:53:10.794538   29417 host.go:66] Checking if "embed-certs-20220728154707-12923" exists ...
	I0728 15:53:10.794655   29417 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:53:10.832023   29417 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0728 15:53:10.869363   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 15:53:10.869383   29417 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0728 15:53:10.906265   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0728 15:53:10.869758   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:53:10.885942   29417 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220728154707-12923" to be "Ready" ...
	I0728 15:53:10.906309   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0728 15:53:10.906320   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0728 15:53:10.906348   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:53:10.906364   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:53:10.906384   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:53:10.925260   29417 node_ready.go:49] node "embed-certs-20220728154707-12923" has status "Ready":"True"
	I0728 15:53:10.925280   29417 node_ready.go:38] duration metric: took 18.955735ms waiting for node "embed-certs-20220728154707-12923" to be "Ready" ...
	I0728 15:53:10.925288   29417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:53:10.939829   29417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-vlhnt" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:11.006438   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:53:11.007876   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:53:11.008043   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:53:11.009282   29417 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 15:53:11.009297   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 15:53:11.009379   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:53:11.080509   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:53:11.222820   29417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:53:11.228693   29417 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0728 15:53:11.228705   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0728 15:53:11.319170   29417 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0728 15:53:11.319191   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0728 15:53:11.324514   29417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 15:53:11.337325   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0728 15:53:11.337338   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0728 15:53:11.414290   29417 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 15:53:11.414306   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0728 15:53:11.422951   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0728 15:53:11.422964   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0728 15:53:11.503900   29417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 15:53:11.519764   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0728 15:53:11.519786   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0728 15:53:11.600178   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0728 15:53:11.600192   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0728 15:53:11.703236   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0728 15:53:11.703255   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0728 15:53:11.843715   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0728 15:53:11.843732   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0728 15:53:11.861278   29417 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.279614695s)
	I0728 15:53:11.861298   29417 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
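
The sed pipeline that completed above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway: a hosts block is inserted just before the forward directive. A simplified string-level sketch of that rewrite (the Corefile content here is abbreviated):

    // corefile_hosts.go: insert a hosts block for host.minikube.internal
    // ahead of the forward directive, as the sed pipeline above does
    // (Corefile content simplified for the sketch).
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := ".:53 {\n        forward . /etc/resolv.conf\n}\n"
        hosts := "        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }\n"
        fmt.Print(strings.Replace(corefile, "        forward .", hosts+"        forward .", 1))
    }
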
	I0728 15:53:11.913809   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0728 15:53:11.913829   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0728 15:53:12.013028   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0728 15:53:12.013041   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0728 15:53:12.100678   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 15:53:12.100724   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0728 15:53:12.134931   29417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 15:53:12.316912   29417 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220728154707-12923"
	I0728 15:53:13.004539   29417 pod_ready.go:102] pod "coredns-6d4b75cb6d-vlhnt" in "kube-system" namespace has status "Ready":"False"
	I0728 15:53:13.038577   29417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0728 15:53:13.096070   29417 addons.go:414] enableAddons completed in 2.603951363s
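
Each addon above follows the same two-step pattern: stage the manifests under /etc/kubernetes/addons, then apply them in a single kubectl call against the cluster's own kubeconfig. A sketch of the apply step with the file list shortened; it assumes kubectl is on PATH and the kubeconfig path is readable on the host running it.

    // apply_addons.go: the single kubectl apply call used for staged
    // addon manifests (file list shortened; assumes kubectl on PATH
    // and a readable kubeconfig at the path below).
    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl", "apply",
            "-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
            "-f", "/etc/kubernetes/addons/dashboard-dp.yaml",
            "-f", "/etc/kubernetes/addons/dashboard-svc.yaml")
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        _ = cmd.Run()
    }
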
	I0728 15:53:14.464520   29417 pod_ready.go:92] pod "coredns-6d4b75cb6d-vlhnt" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:14.464533   29417 pod_ready.go:81] duration metric: took 3.524743781s waiting for pod "coredns-6d4b75cb6d-vlhnt" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.464540   29417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-wnn6n" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.469514   29417 pod_ready.go:92] pod "coredns-6d4b75cb6d-wnn6n" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:14.469523   29417 pod_ready.go:81] duration metric: took 4.979226ms waiting for pod "coredns-6d4b75cb6d-wnn6n" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.469529   29417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.473830   29417 pod_ready.go:92] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:14.473838   29417 pod_ready.go:81] duration metric: took 4.304872ms waiting for pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.473846   29417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.478000   29417 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:14.478008   29417 pod_ready.go:81] duration metric: took 4.157902ms waiting for pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.478013   29417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.481921   29417 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:14.481928   29417 pod_ready.go:81] duration metric: took 3.910555ms waiting for pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.481934   29417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h9xkx" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.856785   29417 pod_ready.go:92] pod "kube-proxy-h9xkx" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:14.856795   29417 pod_ready.go:81] duration metric: took 374.863514ms waiting for pod "kube-proxy-h9xkx" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.856802   29417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:15.256337   29417 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:15.256346   29417 pod_ready.go:81] duration metric: took 399.54609ms waiting for pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:15.256352   29417 pod_ready.go:38] duration metric: took 4.331114559s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:53:15.256372   29417 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:53:15.256416   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:53:15.269024   29417 api_server.go:71] duration metric: took 4.77695004s to wait for apiserver process to appear ...
	I0728 15:53:15.269042   29417 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:53:15.269050   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:53:15.274209   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 200:
	ok
	I0728 15:53:15.275350   29417 api_server.go:140] control plane version: v1.24.3
	I0728 15:53:15.275359   29417 api_server.go:130] duration metric: took 6.31257ms to wait for apiserver health ...
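
The apiserver health check above works like the kubelet probe, but over TLS on the forwarded host port (59133 in this run). A sketch of that poll; certificate verification is skipped here purely for brevity.

    // apiserver_healthz.go: poll the forwarded apiserver healthz
    // endpoint (port 59133 taken from the log above; certificate
    // checks skipped in this sketch for brevity only).
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
        client := &http.Client{Transport: tr}
        resp, err := client.Get("https://127.0.0.1:59133/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }
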
	I0728 15:53:15.275364   29417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:53:15.457117   29417 system_pods.go:59] 9 kube-system pods found
	I0728 15:53:15.457130   29417 system_pods.go:61] "coredns-6d4b75cb6d-vlhnt" [15178362-740f-412b-847e-671a674e7a79] Running
	I0728 15:53:15.457134   29417 system_pods.go:61] "coredns-6d4b75cb6d-wnn6n" [054ea91e-c444-438a-8f47-a758e4bba1ca] Running
	I0728 15:53:15.457137   29417 system_pods.go:61] "etcd-embed-certs-20220728154707-12923" [053ff81a-84c4-4823-b51f-8bdb7fb9b3e6] Running
	I0728 15:53:15.457146   29417 system_pods.go:61] "kube-apiserver-embed-certs-20220728154707-12923" [b07a2a80-df11-40ee-9227-b5704fd89d5b] Running
	I0728 15:53:15.457155   29417 system_pods.go:61] "kube-controller-manager-embed-certs-20220728154707-12923" [bb4e6302-2345-4f3a-9767-5d1ecbf95ca4] Running
	I0728 15:53:15.457159   29417 system_pods.go:61] "kube-proxy-h9xkx" [1cda32a7-99a3-47b3-b2e7-62ab885dc4d8] Running
	I0728 15:53:15.457163   29417 system_pods.go:61] "kube-scheduler-embed-certs-20220728154707-12923" [04849f66-86dc-44e9-b327-3ed02793e728] Running
	I0728 15:53:15.457168   29417 system_pods.go:61] "metrics-server-5c6f97fb75-c8z6p" [4971ee0b-7c98-4351-b8af-e8c7ac2c0605] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:53:15.457174   29417 system_pods.go:61] "storage-provisioner" [ab617b8b-a02e-4305-a631-7930f5b99a8e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 15:53:15.457179   29417 system_pods.go:74] duration metric: took 181.814843ms to wait for pod list to return data ...
	I0728 15:53:15.457186   29417 default_sa.go:34] waiting for default service account to be created ...
	I0728 15:53:15.654167   29417 default_sa.go:45] found service account: "default"
	I0728 15:53:15.654178   29417 default_sa.go:55] duration metric: took 196.99155ms for default service account to be created ...
	I0728 15:53:15.654183   29417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 15:53:15.858047   29417 system_pods.go:86] 9 kube-system pods found
	I0728 15:53:15.858061   29417 system_pods.go:89] "coredns-6d4b75cb6d-vlhnt" [15178362-740f-412b-847e-671a674e7a79] Running
	I0728 15:53:15.858065   29417 system_pods.go:89] "coredns-6d4b75cb6d-wnn6n" [054ea91e-c444-438a-8f47-a758e4bba1ca] Running
	I0728 15:53:15.858069   29417 system_pods.go:89] "etcd-embed-certs-20220728154707-12923" [053ff81a-84c4-4823-b51f-8bdb7fb9b3e6] Running
	I0728 15:53:15.858080   29417 system_pods.go:89] "kube-apiserver-embed-certs-20220728154707-12923" [b07a2a80-df11-40ee-9227-b5704fd89d5b] Running
	I0728 15:53:15.858084   29417 system_pods.go:89] "kube-controller-manager-embed-certs-20220728154707-12923" [bb4e6302-2345-4f3a-9767-5d1ecbf95ca4] Running
	I0728 15:53:15.858087   29417 system_pods.go:89] "kube-proxy-h9xkx" [1cda32a7-99a3-47b3-b2e7-62ab885dc4d8] Running
	I0728 15:53:15.858091   29417 system_pods.go:89] "kube-scheduler-embed-certs-20220728154707-12923" [04849f66-86dc-44e9-b327-3ed02793e728] Running
	I0728 15:53:15.858097   29417 system_pods.go:89] "metrics-server-5c6f97fb75-c8z6p" [4971ee0b-7c98-4351-b8af-e8c7ac2c0605] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:53:15.858102   29417 system_pods.go:89] "storage-provisioner" [ab617b8b-a02e-4305-a631-7930f5b99a8e] Running
	I0728 15:53:15.858106   29417 system_pods.go:126] duration metric: took 203.923938ms to wait for k8s-apps to be running ...
	I0728 15:53:15.858113   29417 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 15:53:15.858161   29417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:53:15.868022   29417 system_svc.go:56] duration metric: took 9.903779ms WaitForService to wait for kubelet.
	I0728 15:53:15.868035   29417 kubeadm.go:572] duration metric: took 5.375974856s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0728 15:53:15.868053   29417 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:53:16.055052   29417 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:53:16.055065   29417 node_conditions.go:123] node cpu capacity is 6
	I0728 15:53:16.055072   29417 node_conditions.go:105] duration metric: took 187.017671ms to run NodePressure ...
	I0728 15:53:16.055080   29417 start.go:216] waiting for startup goroutines ...
	I0728 15:53:16.085957   29417 start.go:506] kubectl: 1.24.2, cluster: 1.24.3 (minor skew: 0)
	I0728 15:53:16.110027   29417 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220728154707-12923" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 22:48:18 UTC, end at Thu 2022-07-28 22:54:08 UTC. --
	Jul 28 22:52:48 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:48.486220852Z" level=info msg="ignoring event" container=b2482e25ce4dbb7ca2d6ae1fe4b6a32cfaff33d16548cd8d7c75db1c49296742 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:48 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:48.558245913Z" level=info msg="ignoring event" container=763e61a63e47b27883d467d767786fe1cb85944e9bdb121bc6f28c6a974ec849 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:48 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:48.710914996Z" level=info msg="ignoring event" container=c5c8e1968b45e3338619882384401aebfa121b24f84224543ed687e35accfbae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:48 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:48.786796700Z" level=info msg="ignoring event" container=2c58b3c05265c259edea9a01cfe95b2f57193d01a6227fb6723129b1414f141f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:48 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:48.858423240Z" level=info msg="ignoring event" container=163f6b16d266ded21aeae3ce3c700646a3f0df13a778bfea773b9053f91bad6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:48 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:48.925160553Z" level=info msg="ignoring event" container=e92de4c51671db7e33e76bf61580ed899b45cfd93da5414c3f9abc31d97a1ada module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:48 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:48.994461908Z" level=info msg="ignoring event" container=c9aba04e2f7482486d40fe0e6cd8b40e8cd555a4f15f10f4fd127d6ffa30d587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:49 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:49.076787967Z" level=info msg="ignoring event" container=ce4317009ff513d1a96c8da19af227208be8ffaa3aefe9a9eacea96ca7218bee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:49 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:49.151539750Z" level=info msg="ignoring event" container=13eff5c7dd5fdd9375d77c15f0cc1aefbacc1df3640c4cd445eec54a41145ecb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:49 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:49.278077109Z" level=info msg="ignoring event" container=c9abe0e4490b96b347ecc77858bfef579692de509a81136b880e33fa818d233b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:49 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:49.344884344Z" level=info msg="ignoring event" container=eecea093929f42328a8b9671a7d98a23e2685089a47f10fa1d670a8d1a4f5413 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:49 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:49.436510959Z" level=info msg="ignoring event" container=de2e7a9bcf7aff0d8880411a0bf89f0fd7f9c2950b1123f795fe0a5ee5934677 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:53:13 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:13.378953526Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:53:13 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:13.379013007Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:53:13 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:13.380220413Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:53:14 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:14.380050896Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 28 22:53:16 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:16.819685320Z" level=info msg="ignoring event" container=bd3261dbe9b41ce708a4745422851c03e95cdfbdceddd57a122830ebc43bd507 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:53:16 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:16.924714588Z" level=info msg="ignoring event" container=40023e75a887e0956796f79cbd51c5f7c9daef960d9f1d7a300f1efab22647cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:53:20 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:20.107278440Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 28 22:53:20 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:20.404106472Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 28 22:53:23 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:23.616644149Z" level=info msg="ignoring event" container=fcf362a6290929a14566d6fe437cac6eb9139cbfefcf9f01cd46b08440d027d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:53:23 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:23.944143588Z" level=info msg="ignoring event" container=62ed001fa5326b9de1c551d9fffde74d4073cf2f4b4a51319ea7e12b16b8ac57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:53:26 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:26.175377392Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:53:26 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:26.175871357Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:53:26 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:26.177051672Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
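	
	The repeated "fake.domain" pull failures above are expected rather than a Docker networking fault: they are consistent with this TestStartStop run pointing the metrics-server image at a deliberately unresolvable registry, which is why the metrics-server pod never comes up (see the non-running-pods check further down). To confirm the name really does not resolve from the host, reusing the container name from this run (assumes the container is still up and getent is present in the node image):
	
	  docker exec embed-certs-20220728154707-12923 getent hosts fake.domain   # non-zero exit matches the dockerd "no such host" errors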
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	62ed001fa5326       a90209bb39e3d                                                                                    45 seconds ago       Exited              dashboard-metrics-scraper   1                   e2b0d6225baa7
	9c10e21c6fa3b       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   49 seconds ago       Running             kubernetes-dashboard        0                   4e59d3aaf71e4
	270dbea174a40       6e38f40d628db                                                                                    55 seconds ago       Running             storage-provisioner         0                   6852a67953e7f
	eb2450e94dc8f       a4ca41631cc7a                                                                                    57 seconds ago       Running             coredns                     0                   2d711d8816247
	3f91d11d220a4       2ae1ba6417cbc                                                                                    58 seconds ago       Running             kube-proxy                  0                   1012cd2650f09
	667a33b70f6db       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   e169a10e3ae9f
	d34e7a0e7b800       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   d0fe55c90c41b
	a4c3594d7a36c       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   b4d0babff3ed6
	f1d5808f4a8c8       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   1e95f71652260
	
	* 
	* ==> coredns [eb2450e94dc8] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220728154707-12923
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220728154707-12923
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551
	                    minikube.k8s.io/name=embed-certs-20220728154707-12923
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_28T15_52_57_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Jul 2022 22:52:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220728154707-12923
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Jul 2022 22:54:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Jul 2022 22:54:06 +0000   Thu, 28 Jul 2022 22:52:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Jul 2022 22:54:06 +0000   Thu, 28 Jul 2022 22:52:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Jul 2022 22:54:06 +0000   Thu, 28 Jul 2022 22:52:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Jul 2022 22:54:06 +0000   Thu, 28 Jul 2022 22:52:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220728154707-12923
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                a360caa3-de3e-4d13-bba8-b88d7ca01c92
	  Boot ID:                    defbf4cf-33ea-4fb3-a1e4-ed0c170fdaf9
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-vlhnt                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     59s
	  kube-system                 etcd-embed-certs-20220728154707-12923                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         72s
	  kube-system                 kube-apiserver-embed-certs-20220728154707-12923             250m (4%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-embed-certs-20220728154707-12923    200m (3%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-h9xkx                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-embed-certs-20220728154707-12923             100m (1%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 metrics-server-5c6f97fb75-c8z6p                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         57s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-tvjpw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-nj4nt                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 57s   kube-proxy       
	  Normal  Starting                 72s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  72s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  72s   kubelet          Node embed-certs-20220728154707-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s   kubelet          Node embed-certs-20220728154707-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s   kubelet          Node embed-certs-20220728154707-12923 status is now: NodeHasSufficientPID
	  Normal  NodeReady                72s   kubelet          Node embed-certs-20220728154707-12923 status is now: NodeReady
	  Normal  RegisteredNode           60s   node-controller  Node embed-certs-20220728154707-12923 event: Registered Node embed-certs-20220728154707-12923 in Controller
	  Normal  Starting                 3s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s    kubelet          Node embed-certs-20220728154707-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet          Node embed-certs-20220728154707-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet          Node embed-certs-20220728154707-12923 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3s    kubelet          Updated Node Allocatable limit across pods
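	
	The duplicated event groups above (one set 72s old, one 3s old) show the kubelet restarted about three seconds before this capture, which lines up with the pause/unpause cycle the Pause subtest drives. If needed, the node's event stream can be re-queried in time order (node events land in the default namespace):
	
	  kubectl --context embed-certs-20220728154707-12923 get events --field-selector involvedObject.name=embed-certs-20220728154707-12923 --sort-by=.lastTimestamp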
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [667a33b70f6d] <==
	* {"level":"info","ts":"2022-07-28T22:52:52.123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-07-28T22:52:52.123Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-07-28T22:52:52.125Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-28T22:52:52.125Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T22:52:52.125Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T22:52:52.125Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-28T22:52:52.125Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:embed-certs-20220728154707-12923 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T22:52:52.372Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-28T22:52:52.372Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:54:09 up  1:15,  0 users,  load average: 2.88, 1.31, 1.07
	Linux embed-certs-20220728154707-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [f1d5808f4a8c] <==
	* I0728 22:52:55.439988       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0728 22:52:55.692907       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0728 22:52:55.715469       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0728 22:52:55.783160       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0728 22:52:55.787275       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0728 22:52:55.788045       1 controller.go:611] quota admission added evaluator for: endpoints
	I0728 22:52:55.790500       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0728 22:52:56.573192       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0728 22:52:57.026902       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0728 22:52:57.032340       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0728 22:52:57.041008       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0728 22:52:57.125101       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0728 22:53:10.201247       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0728 22:53:10.251515       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0728 22:53:11.743094       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0728 22:53:12.326946       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.99.168.195]
	I0728 22:53:12.965424       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.105.174.105]
	I0728 22:53:13.027595       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.110.229.163]
	W0728 22:53:13.232538       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 22:53:13.232577       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0728 22:53:13.232583       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 22:53:13.232599       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 22:53:13.234651       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0728 22:53:13.234679       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
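	
	The v1beta1.metrics.k8s.io errors above mean the aggregated API has no healthy backend: the apiserver proxies that group to the metrics-server Service, whose pod never started (the fake.domain pulls earlier). The registration status can be read directly:
	
	  kubectl --context embed-certs-20220728154707-12923 get apiservice v1beta1.metrics.k8s.io   # expect AVAILABLE=False while metrics-server is down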
	
	* 
	* ==> kube-controller-manager [d34e7a0e7b80] <==
	* I0728 22:53:10.452676       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-wnn6n"
	I0728 22:53:10.456557       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vlhnt"
	I0728 22:53:10.471543       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-wnn6n"
	I0728 22:53:12.145638       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0728 22:53:12.154590       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-c8z6p"
	I0728 22:53:12.824581       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0728 22:53:12.830994       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 22:53:12.833193       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0728 22:53:12.842398       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:53:12.844558       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:53:12.850626       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 22:53:12.851798       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:53:12.852304       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:53:12.862301       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 22:53:12.862315       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:53:12.862594       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 22:53:12.862680       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:53:12.866664       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:53:12.866791       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:53:12.870845       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:53:12.870977       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 22:53:12.922483       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-tvjpw"
	I0728 22:53:12.923592       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-nj4nt"
	E0728 22:54:06.422531       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0728 22:54:06.477627       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
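	
	The burst of FailedCreate "forbidden" errors at 22:53:12 is a creation-order race, not a lasting RBAC problem: the ReplicaSet controller tried to create the dashboard pods before the kubernetes-dashboard ServiceAccount existed, and the SuccessfulCreate events at 22:53:12.922 show the retries succeeded once it did. This can be double-checked with:
	
	  kubectl --context embed-certs-20220728154707-12923 -n kubernetes-dashboard get serviceaccounts,replicasets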
	
	* 
	* ==> kube-proxy [3f91d11d220a] <==
	* I0728 22:53:11.559166       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0728 22:53:11.559279       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0728 22:53:11.559363       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 22:53:11.732009       1 server_others.go:206] "Using iptables Proxier"
	I0728 22:53:11.732037       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 22:53:11.732043       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 22:53:11.732054       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 22:53:11.732086       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:53:11.733039       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:53:11.737906       1 server.go:661] "Version info" version="v1.24.3"
	I0728 22:53:11.737942       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:53:11.738290       1 config.go:317] "Starting service config controller"
	I0728 22:53:11.738326       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 22:53:11.738399       1 config.go:444] "Starting node config controller"
	I0728 22:53:11.738407       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 22:53:11.738464       1 config.go:226] "Starting endpoint slice config controller"
	I0728 22:53:11.738470       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 22:53:11.838877       1 shared_informer.go:262] Caches are synced for node config
	I0728 22:53:11.838921       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0728 22:53:11.838924       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [a4c3594d7a36] <==
	* W0728 22:52:54.479794       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0728 22:52:54.479821       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0728 22:52:54.479760       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0728 22:52:54.479950       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0728 22:52:54.479979       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0728 22:52:54.480334       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0728 22:52:54.479901       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0728 22:52:54.480343       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0728 22:52:55.306677       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0728 22:52:55.306728       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0728 22:52:55.346997       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0728 22:52:55.347047       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0728 22:52:55.374495       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0728 22:52:55.374532       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0728 22:52:55.489410       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0728 22:52:55.489448       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0728 22:52:55.502639       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0728 22:52:55.502675       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0728 22:52:55.525592       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0728 22:52:55.525610       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0728 22:52:55.528392       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0728 22:52:55.528424       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0728 22:52:55.684282       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0728 22:52:55.684318       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0728 22:52:58.173751       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
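	
	The forbidden list/watch errors during 22:52:54-55 are ordinary scheduler startup noise: kube-scheduler opened its informers before the apiserver finished bootstrapping RBAC, and the final cache-sync line shows it recovered. The grant it waits on is the stock kubeadm binding:
	
	  kubectl --context embed-certs-20220728154707-12923 get clusterrolebinding system:kube-scheduler -o wide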
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 22:48:18 UTC, end at Thu 2022-07-28 22:54:10 UTC. --
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.839877    9729 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.839920    9729 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.839950    9729 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.839985    9729 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.840075    9729 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.880796    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcp5g\" (UniqueName: \"kubernetes.io/projected/1cda32a7-99a3-47b3-b2e7-62ab885dc4d8-kube-api-access-xcp5g\") pod \"kube-proxy-h9xkx\" (UID: \"1cda32a7-99a3-47b3-b2e7-62ab885dc4d8\") " pod="kube-system/kube-proxy-h9xkx"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.880848    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ab617b8b-a02e-4305-a631-7930f5b99a8e-tmp\") pod \"storage-provisioner\" (UID: \"ab617b8b-a02e-4305-a631-7930f5b99a8e\") " pod="kube-system/storage-provisioner"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.880866    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4971ee0b-7c98-4351-b8af-e8c7ac2c0605-tmp-dir\") pod \"metrics-server-5c6f97fb75-c8z6p\" (UID: \"4971ee0b-7c98-4351-b8af-e8c7ac2c0605\") " pod="kube-system/metrics-server-5c6f97fb75-c8z6p"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.880884    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkwdg\" (UniqueName: \"kubernetes.io/projected/4971ee0b-7c98-4351-b8af-e8c7ac2c0605-kube-api-access-zkwdg\") pod \"metrics-server-5c6f97fb75-c8z6p\" (UID: \"4971ee0b-7c98-4351-b8af-e8c7ac2c0605\") " pod="kube-system/metrics-server-5c6f97fb75-c8z6p"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.880900    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvf57\" (UniqueName: \"kubernetes.io/projected/ab617b8b-a02e-4305-a631-7930f5b99a8e-kube-api-access-dvf57\") pod \"storage-provisioner\" (UID: \"ab617b8b-a02e-4305-a631-7930f5b99a8e\") " pod="kube-system/storage-provisioner"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.880916    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1cda32a7-99a3-47b3-b2e7-62ab885dc4d8-xtables-lock\") pod \"kube-proxy-h9xkx\" (UID: \"1cda32a7-99a3-47b3-b2e7-62ab885dc4d8\") " pod="kube-system/kube-proxy-h9xkx"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.880986    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1cda32a7-99a3-47b3-b2e7-62ab885dc4d8-lib-modules\") pod \"kube-proxy-h9xkx\" (UID: \"1cda32a7-99a3-47b3-b2e7-62ab885dc4d8\") " pod="kube-system/kube-proxy-h9xkx"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881011    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpq86\" (UniqueName: \"kubernetes.io/projected/15178362-740f-412b-847e-671a674e7a79-kube-api-access-fpq86\") pod \"coredns-6d4b75cb6d-vlhnt\" (UID: \"15178362-740f-412b-847e-671a674e7a79\") " pod="kube-system/coredns-6d4b75cb6d-vlhnt"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881051    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab0cd45a-b372-4bd8-bc60-9d8a65175c7c-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-tvjpw\" (UID: \"ab0cd45a-b372-4bd8-bc60-9d8a65175c7c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tvjpw"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881098    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4ebca962-177f-4a70-9d72-b89712a84628-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-nj4nt\" (UID: \"4ebca962-177f-4a70-9d72-b89712a84628\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-nj4nt"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881118    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlk88\" (UniqueName: \"kubernetes.io/projected/ab0cd45a-b372-4bd8-bc60-9d8a65175c7c-kube-api-access-dlk88\") pod \"dashboard-metrics-scraper-dffd48c4c-tvjpw\" (UID: \"ab0cd45a-b372-4bd8-bc60-9d8a65175c7c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tvjpw"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881136    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1cda32a7-99a3-47b3-b2e7-62ab885dc4d8-kube-proxy\") pod \"kube-proxy-h9xkx\" (UID: \"1cda32a7-99a3-47b3-b2e7-62ab885dc4d8\") " pod="kube-system/kube-proxy-h9xkx"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881150    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbrsn\" (UniqueName: \"kubernetes.io/projected/4ebca962-177f-4a70-9d72-b89712a84628-kube-api-access-dbrsn\") pod \"kubernetes-dashboard-5fd5574d9f-nj4nt\" (UID: \"4ebca962-177f-4a70-9d72-b89712a84628\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-nj4nt"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881167    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15178362-740f-412b-847e-671a674e7a79-config-volume\") pod \"coredns-6d4b75cb6d-vlhnt\" (UID: \"15178362-740f-412b-847e-671a674e7a79\") " pod="kube-system/coredns-6d4b75cb6d-vlhnt"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881177    9729 reconciler.go:157] "Reconciler: start to sync state"
	Jul 28 22:54:09 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:09.037493    9729 request.go:601] Waited for 1.150296862s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 28 22:54:09 embed-certs-20220728154707-12923 kubelet[9729]: E0728 22:54:09.100937    9729 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220728154707-12923\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220728154707-12923"
	Jul 28 22:54:09 embed-certs-20220728154707-12923 kubelet[9729]: E0728 22:54:09.265816    9729 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220728154707-12923\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220728154707-12923"
	Jul 28 22:54:09 embed-certs-20220728154707-12923 kubelet[9729]: E0728 22:54:09.482107    9729 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220728154707-12923\" already exists" pod="kube-system/etcd-embed-certs-20220728154707-12923"
	Jul 28 22:54:09 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:09.941393    9729 scope.go:110] "RemoveContainer" containerID="62ed001fa5326b9de1c551d9fffde74d4073cf2f4b4a51319ea7e12b16b8ac57"
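	
	The "Failed creating a mirror pod ... already exists" errors are benign after a kubelet restart: the mirror Pod objects for the static control-plane manifests survive in the API, so the restarted kubelet's create calls conflict and it adopts the existing objects instead. They remain visible as ordinary pods:
	
	  kubectl --context embed-certs-20220728154707-12923 -n kube-system get pods -l tier=control-plane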
	
	* 
	* ==> kubernetes-dashboard [9c10e21c6fa3] <==
	* 2022/07/28 22:53:19 Using namespace: kubernetes-dashboard
	2022/07/28 22:53:19 Using in-cluster config to connect to apiserver
	2022/07/28 22:53:19 Using secret token for csrf signing
	2022/07/28 22:53:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/28 22:53:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/28 22:53:19 Successful initial request to the apiserver, version: v1.24.3
	2022/07/28 22:53:19 Generating JWE encryption key
	2022/07/28 22:53:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/28 22:53:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/28 22:53:19 Initializing JWE encryption key from synchronized object
	2022/07/28 22:53:19 Creating in-cluster Sidecar client
	2022/07/28 22:53:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/28 22:53:19 Serving insecurely on HTTP port: 9090
	2022/07/28 22:54:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/28 22:53:19 Starting overwatch
	
	* 
	* ==> storage-provisioner [270dbea174a4] <==
	* I0728 22:53:13.228473       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 22:53:13.237230       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 22:53:13.237275       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0728 22:53:13.242360       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0728 22:53:13.242466       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220728154707-12923_500a6a72-89ba-48ed-b2d0-5c509346f99f!
	I0728 22:53:13.242677       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"53f95cd3-6964-4e7e-aa1a-4a3695ff925a", APIVersion:"v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220728154707-12923_500a6a72-89ba-48ed-b2d0-5c509346f99f became leader
	I0728 22:53:13.342924       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220728154707-12923_500a6a72-89ba-48ed-b2d0-5c509346f99f!
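	
	storage-provisioner takes its leader-election lease by annotating the kube-system/k8s.io-minikube-hostpath Endpoints object (acquired above at ResourceVersion 471); the current holder is recorded in that object's annotations:
	
	  kubectl --context embed-certs-20220728154707-12923 -n kube-system get endpoints k8s.io-minikube-hostpath -o jsonpath='{.metadata.annotations}'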
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220728154707-12923 -n embed-certs-20220728154707-12923
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220728154707-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-c8z6p
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220728154707-12923 describe pod metrics-server-5c6f97fb75-c8z6p
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220728154707-12923 describe pod metrics-server-5c6f97fb75-c8z6p: exit status 1 (268.394375ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-c8z6p" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220728154707-12923 describe pod metrics-server-5c6f97fb75-c8z6p: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220728154707-12923
helpers_test.go:235: (dbg) docker inspect embed-certs-20220728154707-12923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c8903a71c1cd8f52218490628108c4a9ed7d61f5462af3eb2947b10273b7e2f7",
	        "Created": "2022-07-28T22:47:13.928680238Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 267797,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:48:18.01746171Z",
	            "FinishedAt": "2022-07-28T22:48:16.073367691Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/c8903a71c1cd8f52218490628108c4a9ed7d61f5462af3eb2947b10273b7e2f7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c8903a71c1cd8f52218490628108c4a9ed7d61f5462af3eb2947b10273b7e2f7/hostname",
	        "HostsPath": "/var/lib/docker/containers/c8903a71c1cd8f52218490628108c4a9ed7d61f5462af3eb2947b10273b7e2f7/hosts",
	        "LogPath": "/var/lib/docker/containers/c8903a71c1cd8f52218490628108c4a9ed7d61f5462af3eb2947b10273b7e2f7/c8903a71c1cd8f52218490628108c4a9ed7d61f5462af3eb2947b10273b7e2f7-json.log",
	        "Name": "/embed-certs-20220728154707-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220728154707-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220728154707-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/663997aeeae59a48d23373ab7860a74585cc1ef039d3f4f056fdf93dafa294ec-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/663997aeeae59a48d23373ab7860a74585cc1ef039d3f4f056fdf93dafa294ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/663997aeeae59a48d23373ab7860a74585cc1ef039d3f4f056fdf93dafa294ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/663997aeeae59a48d23373ab7860a74585cc1ef039d3f4f056fdf93dafa294ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220728154707-12923",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220728154707-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220728154707-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220728154707-12923",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220728154707-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d12fbc03db54d75685e0fd309090c0db0f4afea6a9a4ce714c86ccea24f59d6b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59130"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59131"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59132"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59133"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d12fbc03db54",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220728154707-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c8903a71c1cd",
	                        "embed-certs-20220728154707-12923"
	                    ],
	                    "NetworkID": "a5e0757ffa1aeaefcad3caee47c45090e6ace6f23ba00064b8d9c81124c9d655",
	                    "EndpointID": "5369407a8ce361251bd52c07665c2df50a11be5bc36e51f2d0f029bf3419fd36",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
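The port map in the inspect output above (22/tcp published on 59134, 8443/tcp on 59133, and so on) is the same data the harness reads back with its repeated `docker container inspect -f` template queries. Below is a minimal Go sketch of that lookup, assuming a local docker CLI on PATH; the helper name inspectHostPort is hypothetical, not minikube's cli_runner.

// hostport_sketch.go: hypothetical illustration, not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspectHostPort returns the host port docker published for the given
// container port, using the same Go-template query seen in the log above.
func inspectHostPort(container, containerPort string) (string, error) {
	// e.g. containerPort = "22/tcp"; %q inserts the quoted map key.
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := inspectHostPort("embed-certs-20220728154707-12923", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("22/tcp is published on host port", port) // 59134 in the inspect output above
}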
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220728154707-12923 -n embed-certs-20220728154707-12923

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-20220728154707-12923 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p embed-certs-20220728154707-12923 logs -n 25: (2.789015472s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|--------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |               Profile                |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|--------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | old-k8s-version-20220728153807-12923 | jenkins | v1.26.0 | 28 Jul 22 15:38 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                      |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                      |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                      |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                      |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                      |         |         |                     |                     |
	| ssh     | -p                                                | kubenet-20220728152330-12923         | jenkins | v1.26.0 | 28 Jul 22 15:38 PDT | 28 Jul 22 15:38 PDT |
	|         | kubenet-20220728152330-12923                      |                                      |         |         |                     |                     |
	|         | pgrep -a kubelet                                  |                                      |         |         |                     |                     |
	| delete  | -p                                                | kubenet-20220728152330-12923         | jenkins | v1.26.0 | 28 Jul 22 15:39 PDT | 28 Jul 22 15:39 PDT |
	|         | kubenet-20220728152330-12923                      |                                      |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:39 PDT | 28 Jul 22 15:40 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                      |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:40 PDT | 28 Jul 22 15:40 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                      |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                      |         |         |                     |                     |
	| stop    | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:40 PDT | 28 Jul 22 15:41 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                      |         |         |                     |                     |
	| addons  | enable dashboard -p                               | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:41 PDT | 28 Jul 22 15:41 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                      |         |         |                     |                     |
	| start   | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:41 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                      |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220728153807-12923 | jenkins | v1.26.0 | 28 Jul 22 15:42 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                      |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                      |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                      |         |         |                     |                     |
	| stop    | -p                                                | old-k8s-version-20220728153807-12923 | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT | 28 Jul 22 15:43 PDT |
	|         | old-k8s-version-20220728153807-12923              |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                      |         |         |                     |                     |
	| addons  | enable dashboard -p                               | old-k8s-version-20220728153807-12923 | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT | 28 Jul 22 15:43 PDT |
	|         | old-k8s-version-20220728153807-12923              |                                      |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                      |         |         |                     |                     |
	| start   | -p                                                | old-k8s-version-20220728153807-12923 | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                      |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                      |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                      |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                      |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                      |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                      |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220728153949-12923      | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | no-preload-20220728153949-12923                   |                                      |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                      |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                      |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                      |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                      |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                      |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                      |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                      |         |         |                     |                     |
	|         | --driver=docker                                   |                                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                      |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                      |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220728154707-12923     | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                  |                                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                      |         |         |                     |                     |
	|---------|---------------------------------------------------|--------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 15:48:16
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
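The header layout described above is klog's. As a hedged illustration (hypothetical code, not part of minikube or its test harness), here is a Go sketch that splits one of the lines that follow into severity, date, time, thread id, source location, and message.

// klogparse_sketch.go: hypothetical illustration, not minikube code.
package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the documented header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	sample := "I0728 15:48:17.082982   29417 docker.go:137] docker version: linux-20.10.17"
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}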
	I0728 15:48:16.801497   29417 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:48:16.801653   29417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:48:16.801659   29417 out.go:309] Setting ErrFile to fd 2...
	I0728 15:48:16.801663   29417 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:48:16.801763   29417 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:48:16.802276   29417 out.go:303] Setting JSON to false
	I0728 15:48:16.817752   29417 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9538,"bootTime":1659038958,"procs":351,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:48:16.817842   29417 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:48:16.838782   29417 out.go:177] * [embed-certs-20220728154707-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:48:16.880995   29417 notify.go:193] Checking for updates...
	I0728 15:48:16.901706   29417 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:48:16.927978   29417 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:48:16.949282   29417 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:48:16.971082   29417 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:48:16.992933   29417 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:48:17.014719   29417 config.go:178] Loaded profile config "embed-certs-20220728154707-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:48:17.015366   29417 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:48:17.082982   29417 docker.go:137] docker version: linux-20.10.17
	I0728 15:48:17.083098   29417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:48:17.212741   29417 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:48:17.139329135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:48:17.234803   29417 out.go:177] * Using the docker driver based on existing profile
	I0728 15:48:17.256309   29417 start.go:284] selected driver: docker
	I0728 15:48:17.256336   29417 start.go:808] validating driver "docker" against &{Name:embed-certs-20220728154707-12923 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220728154707-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:48:17.256500   29417 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:48:17.259510   29417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:48:17.389758   29417 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:48:17.31689262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:48:17.389932   29417 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 15:48:17.389950   29417 cni.go:95] Creating CNI manager for ""
	I0728 15:48:17.389959   29417 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:48:17.389967   29417 start_flags.go:310] config:
	{Name:embed-certs-20220728154707-12923 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220728154707-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:48:17.433599   29417 out.go:177] * Starting control plane node embed-certs-20220728154707-12923 in cluster embed-certs-20220728154707-12923
	I0728 15:48:17.455669   29417 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:48:17.477751   29417 out.go:177] * Pulling base image ...
	I0728 15:48:17.519730   29417 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:48:17.519790   29417 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:48:17.519811   29417 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0728 15:48:17.519834   29417 cache.go:57] Caching tarball of preloaded images
	I0728 15:48:17.520022   29417 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:48:17.520061   29417 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0728 15:48:17.521034   29417 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/config.json ...
	I0728 15:48:17.586890   29417 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:48:17.586906   29417 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:48:17.586916   29417 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:48:17.586967   29417 start.go:370] acquiring machines lock for embed-certs-20220728154707-12923: {Name:mkafc927efa8de6adf00771129c22ebc3d05578e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:48:17.587045   29417 start.go:374] acquired machines lock for "embed-certs-20220728154707-12923" in 61.043µs
	I0728 15:48:17.587065   29417 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:48:17.587073   29417 fix.go:55] fixHost starting: 
	I0728 15:48:17.587306   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:48:17.651525   29417 fix.go:103] recreateIfNeeded on embed-certs-20220728154707-12923: state=Stopped err=<nil>
	W0728 15:48:17.651560   29417 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:48:17.673382   29417 out.go:177] * Restarting existing docker container for "embed-certs-20220728154707-12923" ...
	I0728 15:48:17.694189   29417 cli_runner.go:164] Run: docker start embed-certs-20220728154707-12923
	I0728 15:48:18.024290   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:48:18.089501   29417 kic.go:415] container "embed-certs-20220728154707-12923" state is running.
	I0728 15:48:18.090096   29417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220728154707-12923
	I0728 15:48:18.159136   29417 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/config.json ...
	I0728 15:48:18.159526   29417 machine.go:88] provisioning docker machine ...
	I0728 15:48:18.159550   29417 ubuntu.go:169] provisioning hostname "embed-certs-20220728154707-12923"
	I0728 15:48:18.159621   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:18.226433   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:18.226656   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:18.226667   29417 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220728154707-12923 && echo "embed-certs-20220728154707-12923" | sudo tee /etc/hostname
	I0728 15:48:18.356370   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220728154707-12923
	
	I0728 15:48:18.356474   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:18.422345   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:18.422505   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:18.422519   29417 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220728154707-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220728154707-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220728154707-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:48:18.541839   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:48:18.541862   29417 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:48:18.541893   29417 ubuntu.go:177] setting up certificates
	I0728 15:48:18.541906   29417 provision.go:83] configureAuth start
	I0728 15:48:18.541981   29417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220728154707-12923
	I0728 15:48:18.607419   29417 provision.go:138] copyHostCerts
	I0728 15:48:18.607510   29417 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:48:18.607520   29417 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:48:18.607611   29417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:48:18.607810   29417 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:48:18.607820   29417 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:48:18.607885   29417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:48:18.608037   29417 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:48:18.608043   29417 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:48:18.608100   29417 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:48:18.608266   29417 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220728154707-12923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220728154707-12923]
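The san=[...] list in the line above becomes the server certificate's subject alternative names. As a hedged sketch of producing a certificate with those SANs using only Go's standard library: it self-signs for brevity, whereas minikube signs with the ca-key.pem shown in the paths above, and none of this is minikube's actual provision code.

// servercert_sketch.go: hypothetical illustration, not minikube code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-20220728154707-12923"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config above
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the logged san=[...] list (duplicates collapsed).
		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "embed-certs-20220728154707-12923"},
	}
	// Self-signed here (template is its own parent); minikube uses its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}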
	I0728 15:48:18.782136   29417 provision.go:172] copyRemoteCerts
	I0728 15:48:18.782203   29417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:48:18.782257   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:18.846797   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:18.934758   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:48:18.952068   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0728 15:48:18.968832   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 15:48:18.984908   29417 provision.go:86] duration metric: configureAuth took 442.991274ms
	I0728 15:48:18.984926   29417 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:48:18.985086   29417 config.go:178] Loaded profile config "embed-certs-20220728154707-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:48:18.985144   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.049284   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:19.049430   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:19.049439   29417 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:48:19.169479   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:48:19.169492   29417 ubuntu.go:71] root file system type: overlay
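The probe above runs GNU coreutils' `df --output=fstype /` over SSH and reads back "overlay", identifying the kic container's root filesystem. A minimal Go sketch of the same check, run locally rather than over SSH, assuming a GNU df on the target (the helper name rootFSType is hypothetical):

// rootfs_sketch.go: hypothetical illustration, not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// rootFSType runs the same pipeline the log shows and returns the fstype of /.
func rootFSType() (string, error) {
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	fstype, err := rootFSType()
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("root file system type:", fstype) // "overlay" inside the kic container above
}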
	I0728 15:48:19.169633   29417 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:48:19.169705   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.234334   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:19.234474   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:19.234535   29417 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:48:19.363314   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:48:19.363389   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.474028   29417 main.go:134] libmachine: Using SSH client type: native
	I0728 15:48:19.474168   29417 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59134 <nil> <nil>}
	I0728 15:48:19.474181   29417 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:48:19.600015   29417 main.go:134] libmachine: SSH cmd err, output: <nil>: 
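	
	NOTE: the SSH command above is an idempotent unit update: the staged
	docker.service.new replaces the live unit, and the daemon is reloaded,
	re-enabled, and restarted, only when the two files differ. A minimal
	standalone sketch of the same pattern (assuming the new unit is staged):
	
	  if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	  fi
	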
	I0728 15:48:19.600035   29417 machine.go:91] provisioned docker machine in 1.440525827s
	I0728 15:48:19.600045   29417 start.go:307] post-start starting for "embed-certs-20220728154707-12923" (driver="docker")
	I0728 15:48:19.600050   29417 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:48:19.600116   29417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:48:19.600159   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.664230   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:19.753490   29417 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:48:19.756748   29417 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:48:19.756763   29417 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:48:19.756771   29417 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:48:19.756775   29417 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:48:19.756786   29417 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:48:19.756892   29417 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:48:19.757034   29417 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:48:19.757184   29417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:48:19.764556   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:48:19.780836   29417 start.go:310] post-start completed in 180.786416ms
	I0728 15:48:19.780924   29417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:48:19.780983   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.845831   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:19.931331   29417 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:48:19.935737   29417 fix.go:57] fixHost completed within 2.348703259s
	I0728 15:48:19.935747   29417 start.go:82] releasing machines lock for "embed-certs-20220728154707-12923", held for 2.348733925s
	I0728 15:48:19.935815   29417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220728154707-12923
	I0728 15:48:19.999291   29417 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:48:19.999293   29417 ssh_runner.go:195] Run: systemctl --version
	I0728 15:48:19.999351   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:19.999350   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:20.066602   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:20.066639   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:48:20.336780   29417 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:48:20.345950   29417 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:48:20.346003   29417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:48:20.357290   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:48:20.369623   29417 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:48:20.444639   29417 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:48:20.511294   29417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:48:20.570419   29417 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:48:20.816253   29417 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 15:48:20.888550   29417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:48:20.958099   29417 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 15:48:20.967785   29417 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 15:48:20.967856   29417 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 15:48:20.971649   29417 start.go:471] Will wait 60s for crictl version
	I0728 15:48:20.971697   29417 ssh_runner.go:195] Run: sudo crictl version
	I0728 15:48:21.074684   29417 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
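	
	NOTE: the readiness checks above can be reproduced by hand on the node; a
	hedged sketch (the endpoint comes from the /etc/crictl.yaml written earlier):
	
	  stat /var/run/cri-dockerd.sock                  # CRI socket present
	  sudo crictl version                             # endpoint answers: RuntimeName docker
	  docker version --format '{{.Server.Version}}'   # daemon version, e.g. 20.10.17
	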
	I0728 15:48:21.074748   29417 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:48:21.109656   29417 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:48:21.190920   29417 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 15:48:21.191106   29417 cli_runner.go:164] Run: docker exec -t embed-certs-20220728154707-12923 dig +short host.docker.internal
	I0728 15:48:21.315815   29417 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:48:21.315918   29417 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:48:21.320085   29417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
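	
	NOTE: the /etc/hosts rewrite above uses a drop-then-append idiom: filter out
	any stale entry for the name, append the fresh mapping, and copy the temp
	file back in one step. A sketch with placeholder HOSTIP/NAME variables:
	
	  HOSTIP=192.168.65.2; NAME=host.minikube.internal
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$HOSTIP" "$NAME"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
	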
	I0728 15:48:21.329958   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:21.394974   29417 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:48:21.395041   29417 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:48:21.425612   29417 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0728 15:48:21.425628   29417 docker.go:542] Images already preloaded, skipping extraction
	I0728 15:48:21.425703   29417 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:48:21.455867   29417 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0728 15:48:21.455887   29417 cache_images.go:84] Images are preloaded, skipping loading
	I0728 15:48:21.455976   29417 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
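	
	NOTE: the docker info query above matters because the daemon's cgroup driver
	must match the kubelet's cgroupDriver (systemd in the config below); a
	mismatch is a classic kubelet start failure. Sketch of the check:
	
	  driver=$(docker info --format '{{.CgroupDriver}}')
	  [ "$driver" = systemd ] || echo "cgroup driver mismatch: $driver" >&2
	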
	I0728 15:48:21.528212   29417 cni.go:95] Creating CNI manager for ""
	I0728 15:48:21.528224   29417 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:48:21.528238   29417 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:48:21.528250   29417 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220728154707-12923 NodeName:embed-certs-20220728154707-12923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:48:21.528359   29417 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-20220728154707-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 15:48:21.528439   29417 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-20220728154707-12923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220728154707-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0728 15:48:21.528500   29417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 15:48:21.536494   29417 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:48:21.536551   29417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:48:21.543737   29417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (494 bytes)
	I0728 15:48:21.556138   29417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:48:21.568510   29417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0728 15:48:21.580344   29417 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:48:21.583879   29417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:48:21.593400   29417 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923 for IP: 192.168.67.2
	I0728 15:48:21.593521   29417 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:48:21.593573   29417 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:48:21.593648   29417 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/client.key
	I0728 15:48:21.593716   29417 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/apiserver.key.c7fa3a9e
	I0728 15:48:21.593765   29417 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/proxy-client.key
	I0728 15:48:21.593961   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:48:21.593997   29417 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:48:21.594013   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:48:21.594046   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:48:21.594075   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:48:21.594102   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:48:21.594168   29417 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:48:21.594672   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:48:21.611272   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 15:48:21.627747   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:48:21.644521   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/embed-certs-20220728154707-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 15:48:21.661383   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:48:21.677553   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:48:21.693614   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:48:21.710372   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:48:21.727241   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:48:21.743529   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:48:21.760108   29417 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:48:21.776707   29417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 15:48:21.790531   29417 ssh_runner.go:195] Run: openssl version
	I0728 15:48:21.796365   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:48:21.803971   29417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:48:21.829542   29417 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:48:21.829588   29417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:48:21.834653   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
	I0728 15:48:21.841901   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:48:21.849320   29417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:48:21.853199   29417 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:48:21.853254   29417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:48:21.858491   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 15:48:21.865400   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:48:21.872773   29417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:48:21.876616   29417 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:48:21.876657   29417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:48:21.881915   29417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
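	
	NOTE: the ln -fs commands above implement OpenSSL's subject-hash lookup
	scheme: each trusted PEM needs a <hash>.0 symlink in /etc/ssl/certs. A
	generalized sketch for one certificate:
	
	  pem=/usr/share/ca-certificates/minikubeCA.pem
	  hash=$(openssl x509 -hash -noout -in "$pem")
	  sudo ln -fs "$pem" "/etc/ssl/certs/$hash.0"
	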
	I0728 15:48:21.888912   29417 kubeadm.go:395] StartCluster: {Name:embed-certs-20220728154707-12923 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:embed-certs-20220728154707-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:48:21.889007   29417 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:48:21.917473   29417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:48:21.925062   29417 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 15:48:21.925078   29417 kubeadm.go:626] restartCluster start
	I0728 15:48:21.925120   29417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 15:48:21.931678   29417 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:21.931736   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:48:21.995670   29417 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220728154707-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:48:21.995837   29417 kubeconfig.go:127] "embed-certs-20220728154707-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 15:48:21.996174   29417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:48:21.997302   29417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 15:48:22.004907   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.004963   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.013141   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:22.215267   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.215476   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.226779   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:22.415295   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.415493   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.426659   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:22.614923   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.615024   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.625988   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:22.813546   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:22.813626   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:22.822993   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.015317   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.015544   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.025792   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.215277   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.215416   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.226602   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.413566   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.413660   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.424136   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.613588   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.613654   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.622662   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:23.814664   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:23.814775   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:23.825692   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.015300   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.015423   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.026137   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.215292   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.215465   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.226205   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.413410   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.413583   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.423694   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.614714   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.614873   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.625456   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:24.814430   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:24.814534   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:24.825126   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.013304   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:25.013447   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:25.024258   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.024268   29417 api_server.go:165] Checking apiserver status ...
	I0728 15:48:25.024313   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:48:25.032119   29417 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.032130   29417 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0728 15:48:25.032138   29417 kubeadm.go:1092] stopping kube-system containers ...
	I0728 15:48:25.032191   29417 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:48:25.061839   29417 docker.go:443] Stopping containers: [29457d4df313 6ef1cce588df 7d5bc5e389db 8302d6c6443f cdfeb1ef5523 0a683159e91c 25acc0420e55 d443c52be75c a610b508b031 06e6110a99f1 8e2b7312dd3c 7c97803e2ece 4ef5ce52c329 2f78d4daa24c 6855ea1ad5cf f6b3a99aae08]
	I0728 15:48:25.061917   29417 ssh_runner.go:195] Run: docker stop 29457d4df313 6ef1cce588df 7d5bc5e389db 8302d6c6443f cdfeb1ef5523 0a683159e91c 25acc0420e55 d443c52be75c a610b508b031 06e6110a99f1 8e2b7312dd3c 7c97803e2ece 4ef5ce52c329 2f78d4daa24c 6855ea1ad5cf f6b3a99aae08
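	
	NOTE: the two commands above (list matching container IDs, then stop them)
	can be composed directly; a sketch using the same name filter:
	
	  docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop
	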
	I0728 15:48:25.098171   29417 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 15:48:25.111390   29417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:48:25.119798   29417 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 28 22:47 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 28 22:47 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Jul 28 22:47 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 28 22:47 /etc/kubernetes/scheduler.conf
	
	I0728 15:48:25.119853   29417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 15:48:25.127053   29417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 15:48:25.134998   29417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 15:48:25.142975   29417 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.143035   29417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 15:48:25.150202   29417 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 15:48:25.157246   29417 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:48:25.157293   29417 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
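	
	NOTE: the grep/rm pairs above implement a per-file check: any kubeconfig that
	does not reference the expected control-plane endpoint is removed so kubeadm
	regenerates it. A generalized sketch (EP is the expected endpoint):
	
	  EP=https://control-plane.minikube.internal:8443
	  for f in admin kubelet controller-manager scheduler; do
	    sudo grep -q "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	  done
	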
	I0728 15:48:25.164016   29417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:48:25.171050   29417 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 15:48:25.171059   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:25.216093   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:25.844343   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:26.021662   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:26.072258   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
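	
	NOTE: the five commands above are the cluster-restart path: instead of a full
	kubeadm init, each phase is re-run against the refreshed kubeadm.yaml. As a
	loop (sketch; the PATH override to the pinned binaries is omitted):
	
	  for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
	    sudo kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml   # unquoted on purpose: phase words must split
	  done
	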
	I0728 15:48:26.134783   29417 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:48:26.134841   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:48:26.644326   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:48:27.144282   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:48:27.154756   29417 api_server.go:71] duration metric: took 1.019990602s to wait for apiserver process to appear ...
	I0728 15:48:27.154775   29417 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:48:27.154789   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:27.155957   29417 api_server.go:256] stopped: https://127.0.0.1:59133/healthz: Get "https://127.0.0.1:59133/healthz": EOF
	I0728 15:48:27.657022   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:30.452815   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 15:48:30.452848   29417 api_server.go:102] status: https://127.0.0.1:59133/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 15:48:30.657052   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:30.662787   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:48:30.662799   29417 api_server.go:102] status: https://127.0.0.1:59133/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:48:31.156224   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:31.161953   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:48:31.161970   29417 api_server.go:102] status: https://127.0.0.1:59133/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:48:31.656203   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:48:31.664096   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 200:
	ok
	I0728 15:48:31.670530   29417 api_server.go:140] control plane version: v1.24.3
	I0728 15:48:31.670542   29417 api_server.go:130] duration metric: took 4.515837693s to wait for apiserver health ...
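	
	NOTE: the probe above polls /healthz unauthenticated, which explains the
	transient 403 (anonymous user) and 500 (bootstrap hooks still failing)
	responses before the final 200. A curl-based sketch of the same poll:
	
	  until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://127.0.0.1:59133/healthz)" = 200 ]; do
	    sleep 0.5
	  done
	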
	I0728 15:48:31.670547   29417 cni.go:95] Creating CNI manager for ""
	I0728 15:48:31.670552   29417 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:48:31.670562   29417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:48:31.676716   29417 system_pods.go:59] 8 kube-system pods found
	I0728 15:48:31.676732   29417 system_pods.go:61] "coredns-6d4b75cb6d-sz4ss" [b1735e46-67cb-4a2a-9a12-260c98968b3a] Running
	I0728 15:48:31.676746   29417 system_pods.go:61] "etcd-embed-certs-20220728154707-12923" [a389e720-76d6-499e-b34e-3f8013bce707] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0728 15:48:31.676751   29417 system_pods.go:61] "kube-apiserver-embed-certs-20220728154707-12923" [b5ea0f7a-5c6d-41c4-8e7c-f86e90a70222] Running
	I0728 15:48:31.676757   29417 system_pods.go:61] "kube-controller-manager-embed-certs-20220728154707-12923" [9c192195-2527-4438-aa9b-bffc0aebccd1] Running
	I0728 15:48:31.676760   29417 system_pods.go:61] "kube-proxy-hhj48" [11442494-68d4-468e-b506-0302c7692a8d] Running
	I0728 15:48:31.676765   29417 system_pods.go:61] "kube-scheduler-embed-certs-20220728154707-12923" [37f5c49d-8386-4440-ba01-f9d4a3eb7d05] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 15:48:31.676772   29417 system_pods.go:61] "metrics-server-5c6f97fb75-b525p" [1aad746e-e8e5-44ae-a006-2655a20b240b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:48:31.676777   29417 system_pods.go:61] "storage-provisioner" [0e4110a8-11ee-4fe9-8b5e-6874c4466099] Running
	I0728 15:48:31.676780   29417 system_pods.go:74] duration metric: took 6.214305ms to wait for pod list to return data ...
	I0728 15:48:31.676788   29417 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:48:31.679407   29417 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:48:31.679420   29417 node_conditions.go:123] node cpu capacity is 6
	I0728 15:48:31.679429   29417 node_conditions.go:105] duration metric: took 2.637683ms to run NodePressure ...
	I0728 15:48:31.679439   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:48:31.806182   29417 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0728 15:48:31.829087   29417 kubeadm.go:777] kubelet initialised
	I0728 15:48:31.829099   29417 kubeadm.go:778] duration metric: took 4.724763ms waiting for restarted kubelet to initialise ...
	I0728 15:48:31.829106   29417 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:48:31.835265   29417 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-sz4ss" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:31.839853   29417 pod_ready.go:92] pod "coredns-6d4b75cb6d-sz4ss" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:31.839863   29417 pod_ready.go:81] duration metric: took 4.583461ms waiting for pod "coredns-6d4b75cb6d-sz4ss" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:31.839869   29417 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:33.853326   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:36.349932   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:38.351762   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:40.850107   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:42.852862   29417 pod_ready.go:102] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:44.852267   29417 pod_ready.go:92] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:44.852279   29417 pod_ready.go:81] duration metric: took 13.012622945s waiting for pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:44.852286   29417 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:44.857375   29417 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:44.857384   29417 pod_ready.go:81] duration metric: took 5.09412ms waiting for pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:44.857392   29417 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:46.867189   29417 pod_ready.go:102] pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:47.367656   29417 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:47.367668   29417 pod_ready.go:81] duration metric: took 2.510313483s waiting for pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.367678   29417 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hhj48" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.371404   29417 pod_ready.go:92] pod "kube-proxy-hhj48" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:47.371412   29417 pod_ready.go:81] duration metric: took 3.727786ms waiting for pod "kube-proxy-hhj48" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.371420   29417 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.375602   29417 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:48:47.375609   29417 pod_ready.go:81] duration metric: took 4.18498ms waiting for pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:47.375616   29417 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace to be "Ready" ...
	I0728 15:48:49.387717   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:51.885666   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:53.886054   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:56.384883   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:48:58.388307   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:00.887815   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:03.384496   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:05.387715   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:07.885515   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:09.885979   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:11.887797   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:14.388045   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:16.885529   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:18.885542   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:20.887317   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:23.386722   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:25.387475   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:27.885734   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:29.887218   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:32.385298   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:34.884952   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:36.886061   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:38.887042   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:41.384351   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:43.385883   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:45.885734   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:48.387488   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:50.888333   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:53.387254   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:49:55.883448   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	W0728 15:49:56.989217   28750 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[... the same kubelet-check pair repeated four more times before kubeadm gave up ...]
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.17. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
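The kubeadm message above is itself the triage guide. A minimal sketch of that sequence, run from a shell inside the minikube node (e.g. via `minikube ssh`); it uses only the commands quoted in the message plus the health probe kubeadm was polling:

    # Triage steps suggested by the kubeadm failure above, run inside the node.
    sudo systemctl status kubelet              # is the kubelet service running at all?
    sudo journalctl -xeu kubelet | tail -n 50  # recent kubelet errors
    curl -sSL http://localhost:10248/healthz   # the probe kubeadm was polling
    docker ps -a | grep kube | grep -v pause   # did any control-plane container start?
    # If a container started and exited, inspect it (CONTAINERID is a placeholder):
    # docker logs CONTAINERID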
	
	I0728 15:49:56.989249   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0728 15:49:57.413331   28750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:49:57.423140   28750 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:49:57.423194   28750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:49:57.430740   28750 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
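The "config check failed, skipping stale config cleanup" lines above show how minikube probes for leftover kubeconfigs: it lists the four files and treats a non-zero exit (status 2 here, since none exist) as "nothing stale to clean". A sketch of that probe, using the paths from the log:

    # Sketch of minikube's stale-config probe: ls exits non-zero when any
    # of the four kubeconfig files is missing, so cleanup is skipped.
    if sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
        /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; then
        echo "configs present: stale-config cleanup would run"
    else
        echo "no configs on disk (ls exit $?): cleanup skipped"
    fi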
	I0728 15:49:57.430758   28750 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:49:58.137590   28750 out.go:204]   - Generating certificates and keys ...
	I0728 15:49:58.961107   28750 out.go:204]   - Booting up control plane ...
	I0728 15:49:57.884245   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	[... 47 similar pod_ready.go:102 checks elided, 15:49:59 through 15:51:47, "Ready" still "False" ...]
	I0728 15:51:49.883283   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:51:53.880636   28750 kubeadm.go:397] StartCluster complete in 7m58.936593061s
	I0728 15:51:53.880713   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0728 15:51:53.913209   28750 logs.go:274] 0 containers: []
	W0728 15:51:53.913222   28750 logs.go:276] No container was found matching "kube-apiserver"
	I0728 15:51:53.913282   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0728 15:51:53.943409   28750 logs.go:274] 0 containers: []
	W0728 15:51:53.943421   28750 logs.go:276] No container was found matching "etcd"
	I0728 15:51:53.943481   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0728 15:51:53.973451   28750 logs.go:274] 0 containers: []
	W0728 15:51:53.973463   28750 logs.go:276] No container was found matching "coredns"
	I0728 15:51:53.973516   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0728 15:51:54.002910   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.002922   28750 logs.go:276] No container was found matching "kube-scheduler"
	I0728 15:51:54.002981   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0728 15:51:54.035653   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.035665   28750 logs.go:276] No container was found matching "kube-proxy"
	I0728 15:51:54.035724   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0728 15:51:54.068593   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.068606   28750 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0728 15:51:54.068668   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0728 15:51:54.098273   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.098285   28750 logs.go:276] No container was found matching "storage-provisioner"
	I0728 15:51:54.098344   28750 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0728 15:51:54.127232   28750 logs.go:274] 0 containers: []
	W0728 15:51:54.127244   28750 logs.go:276] No container was found matching "kube-controller-manager"
	I0728 15:51:54.127252   28750 logs.go:123] Gathering logs for container status ...
	I0728 15:51:54.127259   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0728 15:51:56.179496   28750 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052257174s)
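The container-status command above uses a small shell fallback so it works whether or not crictl is installed. The idiom, exactly as run in the log:

    # Prefer crictl when present, otherwise fall back to `docker ps -a`.
    # `which crictl` prints the binary path if found; if not, the literal
    # word "crictl" is substituted, that command fails, and `||` runs the
    # docker fallback instead.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a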
	I0728 15:51:56.179636   28750 logs.go:123] Gathering logs for kubelet ...
	I0728 15:51:56.179644   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0728 15:51:56.220729   28750 logs.go:123] Gathering logs for dmesg ...
	I0728 15:51:56.220744   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0728 15:51:56.232226   28750 logs.go:123] Gathering logs for describe nodes ...
	I0728 15:51:56.232240   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0728 15:51:56.289365   28750 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
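The "connection refused" on localhost:8443 simply confirms that no apiserver is listening, which matches the empty `docker ps` probes above. A quick way to verify that from inside the node (a sketch, not from the log; 8443 is the apiserver port used throughout this run):

    # Confirm nothing is bound to the apiserver port.
    sudo ss -tlnp | grep 8443 || echo "no listener on 8443"
    curl -k https://localhost:8443/healthz || echo "apiserver unreachable"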
	I0728 15:51:56.289376   28750 logs.go:123] Gathering logs for Docker ...
	I0728 15:51:56.289383   28750 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0728 15:51:56.303663   28750 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [... kubeadm init failure output elided: verbatim repeat of the dump quoted in full above, after "! initialization failed, will try again" ...]
	W0728 15:51:56.303683   28750 out.go:239] * 
	W0728 15:51:56.303804   28750 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [... same kubeadm init failure output elided (immediate verbatim repeat of the preceding "Error starting cluster" dump) ...]
	
	W0728 15:51:56.303823   28750 out.go:239] * 
	W0728 15:51:56.304345   28750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0728 15:51:56.399661   28750 out.go:177] 
	W0728 15:51:56.441944   28750 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [... same kubeadm init failure output elided (verbatim repeat; see the full dump above) ...]
	
	W0728 15:51:56.442070   28750 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0728 15:51:56.442149   28750 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
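Spelled out, the suggested retry looks like this (a sketch; the profile name is a placeholder for the one used in this run):

    # Inspect the kubelet journal first, as the suggestion says:
    minikube ssh -p <profile> -- sudo journalctl -xeu kubelet | tail -n 100
    # Then retry with the kubelet forced onto the systemd cgroup driver:
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd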
	I0728 15:51:56.483570   28750 out.go:177] 
	I0728 15:51:51.885062   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	[... 22 similar pod_ready.go:102 checks elided, 15:51:54 through 15:52:43, "Ready" still "False" ...]
	I0728 15:52:45.882240   29417 pod_ready.go:102] pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace has status "Ready":"False"
	I0728 15:52:47.375364   29417 pod_ready.go:81] duration metric: took 4m0.003710612s waiting for pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace to be "Ready" ...
	E0728 15:52:47.375409   29417 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-b525p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0728 15:52:47.375443   29417 pod_ready.go:38] duration metric: took 4m15.550586171s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:52:47.375472   29417 kubeadm.go:630] restartCluster took 4m25.45481034s
	W0728 15:52:47.375941   29417 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0728 15:52:47.375998   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0728 15:52:49.817713   29417 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.441734678s)
	I0728 15:52:49.817775   29417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:52:49.827120   29417 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:52:49.834217   29417 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 15:52:49.834261   29417 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:52:49.841453   29417 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 15:52:49.841479   29417 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 15:52:50.124147   29417 out.go:204]   - Generating certificates and keys ...
	I0728 15:52:50.778545   29417 out.go:204]   - Booting up control plane ...
	I0728 15:52:56.833665   29417 out.go:204]   - Configuring RBAC rules ...
	I0728 15:52:57.217898   29417 cni.go:95] Creating CNI manager for ""
	I0728 15:52:57.217910   29417 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:52:57.217931   29417 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 15:52:57.218029   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:52:57.218044   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551 minikube.k8s.io/name=embed-certs-20220728154707-12923 minikube.k8s.io/updated_at=2022_07_28T15_52_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:52:57.228119   29417 ops.go:34] apiserver oom_adj: -16
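The oom_adj probe above reads the apiserver's OOM-killer score adjustment straight out of /proc; the -16 reported makes the kernel's OOM killer strongly prefer other victims. The same one-liner:

    # Read kube-apiserver's OOM score adjustment, as minikube does above.
    cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16 in this run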
	I0728 15:52:57.347112   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... 24 identical "kubectl get sa default" retries elided, roughly every 500ms from 15:52:57 through 15:53:09 ...]
	I0728 15:53:09.922089   29417 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 15:53:09.976078   29417 kubeadm.go:1045] duration metric: took 12.758317756s to wait for elevateKubeSystemPrivileges.
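The burst of `kubectl get sa default` calls above is a poll loop: the default service account is created asynchronously after kubeadm init, so minikube retries until it exists (12.76s here). A minimal sketch of the same wait, with the retry interval assumed:

    # Poll until the "default" service account exists, as the
    # elevateKubeSystemPrivileges wait above does.
    until sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done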
	I0728 15:53:09.976125   29417 kubeadm.go:397] StartCluster complete in 4m48.092013069s
	I0728 15:53:09.976150   29417 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:53:09.976224   29417 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:53:09.976957   29417 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:53:10.492087   29417 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220728154707-12923" rescaled to 1
	I0728 15:53:10.492129   29417 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 15:53:10.492148   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 15:53:10.492168   29417 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0728 15:53:10.515270   29417 out.go:177] * Verifying Kubernetes components...
	I0728 15:53:10.492298   29417 config.go:178] Loaded profile config "embed-certs-20220728154707-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:53:10.515348   29417 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220728154707-12923"
	I0728 15:53:10.515351   29417 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220728154707-12923"
	I0728 15:53:10.515351   29417 addons.go:65] Setting dashboard=true in profile "embed-certs-20220728154707-12923"
	I0728 15:53:10.515354   29417 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220728154707-12923"
	I0728 15:53:10.572172   29417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:53:10.572175   29417 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220728154707-12923"
	I0728 15:53:10.572176   29417 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220728154707-12923"
	W0728 15:53:10.572185   29417 addons.go:162] addon metrics-server should already be in state true
	I0728 15:53:10.572188   29417 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220728154707-12923"
	W0728 15:53:10.572191   29417 addons.go:162] addon storage-provisioner should already be in state true
	I0728 15:53:10.572199   29417 addons.go:153] Setting addon dashboard=true in "embed-certs-20220728154707-12923"
	W0728 15:53:10.572224   29417 addons.go:162] addon dashboard should already be in state true
	I0728 15:53:10.572234   29417 host.go:66] Checking if "embed-certs-20220728154707-12923" exists ...
	I0728 15:53:10.572234   29417 host.go:66] Checking if "embed-certs-20220728154707-12923" exists ...
	I0728 15:53:10.572263   29417 host.go:66] Checking if "embed-certs-20220728154707-12923" exists ...
	I0728 15:53:10.572531   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:53:10.572687   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:53:10.573316   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:53:10.573324   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:53:10.581651   29417 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
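The pipeline above patches CoreDNS in place: fetch the ConfigMap, use sed to insert a hosts block ahead of the forward plugin, and `kubectl replace` the result, so host.minikube.internal resolves to the host gateway (192.168.65.2 here). The fragment it injects into the Corefile looks roughly like this (echoed as a sketch):

    # The stanza the sed command inserts before "forward . /etc/resolv.conf":
    cat <<'EOF'
            hosts {
               192.168.65.2 host.minikube.internal
               fallthrough
            }
    EOF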
	I0728 15:53:10.601502   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:53:10.693262   29417 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0728 15:53:10.701418   29417 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220728154707-12923"
	I0728 15:53:10.715188   29417 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0728 15:53:10.752047   29417 addons.go:162] addon default-storageclass should already be in state true
	I0728 15:53:10.773336   29417 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0728 15:53:10.794538   29417 host.go:66] Checking if "embed-certs-20220728154707-12923" exists ...
	I0728 15:53:10.794655   29417 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:53:10.832023   29417 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0728 15:53:10.869363   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 15:53:10.869383   29417 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0728 15:53:10.906265   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0728 15:53:10.869758   29417 cli_runner.go:164] Run: docker container inspect embed-certs-20220728154707-12923 --format={{.State.Status}}
	I0728 15:53:10.885942   29417 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220728154707-12923" to be "Ready" ...
	I0728 15:53:10.906309   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0728 15:53:10.906320   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0728 15:53:10.906348   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:53:10.906364   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:53:10.906384   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:53:10.925260   29417 node_ready.go:49] node "embed-certs-20220728154707-12923" has status "Ready":"True"
	I0728 15:53:10.925280   29417 node_ready.go:38] duration metric: took 18.955735ms waiting for node "embed-certs-20220728154707-12923" to be "Ready" ...
	I0728 15:53:10.925288   29417 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:53:10.939829   29417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-vlhnt" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:11.006438   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:53:11.007876   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:53:11.008043   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:53:11.009282   29417 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 15:53:11.009297   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 15:53:11.009379   29417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220728154707-12923
	I0728 15:53:11.080509   29417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59134 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/embed-certs-20220728154707-12923/id_rsa Username:docker}
	I0728 15:53:11.222820   29417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 15:53:11.228693   29417 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0728 15:53:11.228705   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0728 15:53:11.319170   29417 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0728 15:53:11.319191   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0728 15:53:11.324514   29417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 15:53:11.337325   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0728 15:53:11.337338   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0728 15:53:11.414290   29417 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 15:53:11.414306   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0728 15:53:11.422951   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0728 15:53:11.422964   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0728 15:53:11.503900   29417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 15:53:11.519764   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0728 15:53:11.519786   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0728 15:53:11.600178   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0728 15:53:11.600192   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0728 15:53:11.703236   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0728 15:53:11.703255   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0728 15:53:11.843715   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0728 15:53:11.843732   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0728 15:53:11.861278   29417 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.279614695s)
	I0728 15:53:11.861298   29417 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0728 15:53:11.913809   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0728 15:53:11.913829   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0728 15:53:12.013028   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0728 15:53:12.013041   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0728 15:53:12.100678   29417 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 15:53:12.100724   29417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0728 15:53:12.134931   29417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 15:53:12.316912   29417 addons.go:383] Verifying addon metrics-server=true in "embed-certs-20220728154707-12923"
	I0728 15:53:13.004539   29417 pod_ready.go:102] pod "coredns-6d4b75cb6d-vlhnt" in "kube-system" namespace has status "Ready":"False"
	I0728 15:53:13.038577   29417 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0728 15:53:13.096070   29417 addons.go:414] enableAddons completed in 2.603951363s
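	The addon phase above can be cross-checked by hand; a minimal sketch, assuming the profile name from this log and a kubectl context pointing at this cluster:
	
	    # list addon state for this profile
	    minikube addons list -p embed-certs-20220728154707-12923
	    # the four enabled addons should surface pods in kube-system / kubernetes-dashboard
	    kubectl get pods -n kube-system
	    kubectl get pods -n kubernetes-dashboard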
	I0728 15:53:14.464520   29417 pod_ready.go:92] pod "coredns-6d4b75cb6d-vlhnt" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:14.464533   29417 pod_ready.go:81] duration metric: took 3.524743781s waiting for pod "coredns-6d4b75cb6d-vlhnt" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.464540   29417 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-wnn6n" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.469514   29417 pod_ready.go:92] pod "coredns-6d4b75cb6d-wnn6n" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:14.469523   29417 pod_ready.go:81] duration metric: took 4.979226ms waiting for pod "coredns-6d4b75cb6d-wnn6n" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.469529   29417 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.473830   29417 pod_ready.go:92] pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:14.473838   29417 pod_ready.go:81] duration metric: took 4.304872ms waiting for pod "etcd-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.473846   29417 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.478000   29417 pod_ready.go:92] pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:14.478008   29417 pod_ready.go:81] duration metric: took 4.157902ms waiting for pod "kube-apiserver-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.478013   29417 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.481921   29417 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:14.481928   29417 pod_ready.go:81] duration metric: took 3.910555ms waiting for pod "kube-controller-manager-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.481934   29417 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h9xkx" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.856785   29417 pod_ready.go:92] pod "kube-proxy-h9xkx" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:14.856795   29417 pod_ready.go:81] duration metric: took 374.863514ms waiting for pod "kube-proxy-h9xkx" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:14.856802   29417 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:15.256337   29417 pod_ready.go:92] pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:53:15.256346   29417 pod_ready.go:81] duration metric: took 399.54609ms waiting for pod "kube-scheduler-embed-certs-20220728154707-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:53:15.256352   29417 pod_ready.go:38] duration metric: took 4.331114559s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
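	The per-pod readiness loop above is roughly what "kubectl wait" does; a hand-run equivalent sketch, assuming the same 6m budget and the selectors listed in the log:
	
	    kubectl wait pod -n kube-system -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	    kubectl wait pod -n kube-system -l component=kube-apiserver --for=condition=Ready --timeout=6m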
	I0728 15:53:15.256372   29417 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:53:15.256416   29417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:53:15.269024   29417 api_server.go:71] duration metric: took 4.77695004s to wait for apiserver process to appear ...
	I0728 15:53:15.269042   29417 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:53:15.269050   29417 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59133/healthz ...
	I0728 15:53:15.274209   29417 api_server.go:266] https://127.0.0.1:59133/healthz returned 200:
	ok
	I0728 15:53:15.275350   29417 api_server.go:140] control plane version: v1.24.3
	I0728 15:53:15.275359   29417 api_server.go:130] duration metric: took 6.31257ms to wait for apiserver health ...
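	The healthz probe can be reproduced directly against the forwarded port seen above; unauthenticated access to /healthz normally works via the system:public-info-viewer default role, though that is an assumption about this cluster's RBAC defaults:
	
	    # port 59133 comes from the log line above
	    curl -sk https://127.0.0.1:59133/healthz   # expect: ok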
	I0728 15:53:15.275364   29417 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:53:15.457117   29417 system_pods.go:59] 9 kube-system pods found
	I0728 15:53:15.457130   29417 system_pods.go:61] "coredns-6d4b75cb6d-vlhnt" [15178362-740f-412b-847e-671a674e7a79] Running
	I0728 15:53:15.457134   29417 system_pods.go:61] "coredns-6d4b75cb6d-wnn6n" [054ea91e-c444-438a-8f47-a758e4bba1ca] Running
	I0728 15:53:15.457137   29417 system_pods.go:61] "etcd-embed-certs-20220728154707-12923" [053ff81a-84c4-4823-b51f-8bdb7fb9b3e6] Running
	I0728 15:53:15.457146   29417 system_pods.go:61] "kube-apiserver-embed-certs-20220728154707-12923" [b07a2a80-df11-40ee-9227-b5704fd89d5b] Running
	I0728 15:53:15.457155   29417 system_pods.go:61] "kube-controller-manager-embed-certs-20220728154707-12923" [bb4e6302-2345-4f3a-9767-5d1ecbf95ca4] Running
	I0728 15:53:15.457159   29417 system_pods.go:61] "kube-proxy-h9xkx" [1cda32a7-99a3-47b3-b2e7-62ab885dc4d8] Running
	I0728 15:53:15.457163   29417 system_pods.go:61] "kube-scheduler-embed-certs-20220728154707-12923" [04849f66-86dc-44e9-b327-3ed02793e728] Running
	I0728 15:53:15.457168   29417 system_pods.go:61] "metrics-server-5c6f97fb75-c8z6p" [4971ee0b-7c98-4351-b8af-e8c7ac2c0605] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:53:15.457174   29417 system_pods.go:61] "storage-provisioner" [ab617b8b-a02e-4305-a631-7930f5b99a8e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0728 15:53:15.457179   29417 system_pods.go:74] duration metric: took 181.814843ms to wait for pod list to return data ...
	I0728 15:53:15.457186   29417 default_sa.go:34] waiting for default service account to be created ...
	I0728 15:53:15.654167   29417 default_sa.go:45] found service account: "default"
	I0728 15:53:15.654178   29417 default_sa.go:55] duration metric: took 196.99155ms for default service account to be created ...
	I0728 15:53:15.654183   29417 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 15:53:15.858047   29417 system_pods.go:86] 9 kube-system pods found
	I0728 15:53:15.858061   29417 system_pods.go:89] "coredns-6d4b75cb6d-vlhnt" [15178362-740f-412b-847e-671a674e7a79] Running
	I0728 15:53:15.858065   29417 system_pods.go:89] "coredns-6d4b75cb6d-wnn6n" [054ea91e-c444-438a-8f47-a758e4bba1ca] Running
	I0728 15:53:15.858069   29417 system_pods.go:89] "etcd-embed-certs-20220728154707-12923" [053ff81a-84c4-4823-b51f-8bdb7fb9b3e6] Running
	I0728 15:53:15.858080   29417 system_pods.go:89] "kube-apiserver-embed-certs-20220728154707-12923" [b07a2a80-df11-40ee-9227-b5704fd89d5b] Running
	I0728 15:53:15.858084   29417 system_pods.go:89] "kube-controller-manager-embed-certs-20220728154707-12923" [bb4e6302-2345-4f3a-9767-5d1ecbf95ca4] Running
	I0728 15:53:15.858087   29417 system_pods.go:89] "kube-proxy-h9xkx" [1cda32a7-99a3-47b3-b2e7-62ab885dc4d8] Running
	I0728 15:53:15.858091   29417 system_pods.go:89] "kube-scheduler-embed-certs-20220728154707-12923" [04849f66-86dc-44e9-b327-3ed02793e728] Running
	I0728 15:53:15.858097   29417 system_pods.go:89] "metrics-server-5c6f97fb75-c8z6p" [4971ee0b-7c98-4351-b8af-e8c7ac2c0605] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:53:15.858102   29417 system_pods.go:89] "storage-provisioner" [ab617b8b-a02e-4305-a631-7930f5b99a8e] Running
	I0728 15:53:15.858106   29417 system_pods.go:126] duration metric: took 203.923938ms to wait for k8s-apps to be running ...
	I0728 15:53:15.858113   29417 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 15:53:15.858161   29417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:53:15.868022   29417 system_svc.go:56] duration metric: took 9.903779ms WaitForService to wait for kubelet.
	I0728 15:53:15.868035   29417 kubeadm.go:572] duration metric: took 5.375974856s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0728 15:53:15.868053   29417 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:53:16.055052   29417 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:53:16.055065   29417 node_conditions.go:123] node cpu capacity is 6
	I0728 15:53:16.055072   29417 node_conditions.go:105] duration metric: took 187.017671ms to run NodePressure ...
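	The NodePressure check reads the node's capacity fields; a one-liner sketch to pull the same numbers, assuming the current kubectl context:
	
	    kubectl get node embed-certs-20220728154707-12923 -o jsonpath='{.status.capacity}{"\n"}'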
	I0728 15:53:16.055080   29417 start.go:216] waiting for startup goroutines ...
	I0728 15:53:16.085957   29417 start.go:506] kubectl: 1.24.2, cluster: 1.24.3 (minor skew: 0)
	I0728 15:53:16.110027   29417 out.go:177] * Done! kubectl is now configured to use "embed-certs-20220728154707-12923" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 22:48:18 UTC, end at Thu 2022-07-28 22:54:13 UTC. --
	Jul 28 22:52:48 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:48.858423240Z" level=info msg="ignoring event" container=163f6b16d266ded21aeae3ce3c700646a3f0df13a778bfea773b9053f91bad6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:48 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:48.925160553Z" level=info msg="ignoring event" container=e92de4c51671db7e33e76bf61580ed899b45cfd93da5414c3f9abc31d97a1ada module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:48 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:48.994461908Z" level=info msg="ignoring event" container=c9aba04e2f7482486d40fe0e6cd8b40e8cd555a4f15f10f4fd127d6ffa30d587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:49 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:49.076787967Z" level=info msg="ignoring event" container=ce4317009ff513d1a96c8da19af227208be8ffaa3aefe9a9eacea96ca7218bee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:49 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:49.151539750Z" level=info msg="ignoring event" container=13eff5c7dd5fdd9375d77c15f0cc1aefbacc1df3640c4cd445eec54a41145ecb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:49 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:49.278077109Z" level=info msg="ignoring event" container=c9abe0e4490b96b347ecc77858bfef579692de509a81136b880e33fa818d233b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:49 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:49.344884344Z" level=info msg="ignoring event" container=eecea093929f42328a8b9671a7d98a23e2685089a47f10fa1d670a8d1a4f5413 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:52:49 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:52:49.436510959Z" level=info msg="ignoring event" container=de2e7a9bcf7aff0d8880411a0bf89f0fd7f9c2950b1123f795fe0a5ee5934677 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:53:13 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:13.378953526Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:53:13 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:13.379013007Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:53:13 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:13.380220413Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:53:14 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:14.380050896Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 28 22:53:16 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:16.819685320Z" level=info msg="ignoring event" container=bd3261dbe9b41ce708a4745422851c03e95cdfbdceddd57a122830ebc43bd507 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:53:16 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:16.924714588Z" level=info msg="ignoring event" container=40023e75a887e0956796f79cbd51c5f7c9daef960d9f1d7a300f1efab22647cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:53:20 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:20.107278440Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 28 22:53:20 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:20.404106472Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 28 22:53:23 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:23.616644149Z" level=info msg="ignoring event" container=fcf362a6290929a14566d6fe437cac6eb9139cbfefcf9f01cd46b08440d027d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:53:23 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:23.944143588Z" level=info msg="ignoring event" container=62ed001fa5326b9de1c551d9fffde74d4073cf2f4b4a51319ea7e12b16b8ac57 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:53:26 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:26.175377392Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:53:26 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:26.175871357Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:53:26 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:53:26.177051672Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:54:10 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:54:10.322993151Z" level=info msg="ignoring event" container=0edb881f79a74cb60fe6109ac947b2fbcbe54ad26dde1ef26b2cae19fd21bc32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 22:54:10 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:54:10.645334554Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:54:10 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:54:10.645400726Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 22:54:10 embed-certs-20220728154707-12923 dockerd[497]: time="2022-07-28T22:54:10.679629953Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
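	The repeated fake.domain lookup failures are expected: the metrics-server test deliberately points an image at a registry that does not resolve (see the "Using image fake.domain/k8s.gcr.io/echoserver:1.4" line earlier in this log). A sketch to reproduce the same DNS error from inside the node:
	
	    minikube -p embed-certs-20220728154707-12923 ssh -- docker pull fake.domain/k8s.gcr.io/echoserver:1.4
	    # expected to fail with: lookup fake.domain ... no such host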
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	0edb881f79a74       a90209bb39e3d                                                                                    4 seconds ago        Exited              dashboard-metrics-scraper   2                   e2b0d6225baa7
	9c10e21c6fa3b       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   54 seconds ago       Running             kubernetes-dashboard        0                   4e59d3aaf71e4
	270dbea174a40       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   6852a67953e7f
	eb2450e94dc8f       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   2d711d8816247
	3f91d11d220a4       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   1012cd2650f09
	667a33b70f6db       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   e169a10e3ae9f
	d34e7a0e7b800       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   d0fe55c90c41b
	a4c3594d7a36c       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   b4d0babff3ed6
	f1d5808f4a8c8       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   1e95f71652260
	
	* 
	* ==> coredns [eb2450e94dc8] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
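	The "[INFO] Reloading" pair corresponds to the host-record injection logged earlier (host.minikube.internal -> 192.168.65.2). A sketch to inspect the rewritten Corefile, assuming the current kubectl context:
	
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'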
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220728154707-12923
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220728154707-12923
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551
	                    minikube.k8s.io/name=embed-certs-20220728154707-12923
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_28T15_52_57_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Jul 2022 22:52:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220728154707-12923
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Jul 2022 22:54:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Jul 2022 22:54:06 +0000   Thu, 28 Jul 2022 22:52:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Jul 2022 22:54:06 +0000   Thu, 28 Jul 2022 22:52:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Jul 2022 22:54:06 +0000   Thu, 28 Jul 2022 22:52:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Jul 2022 22:54:06 +0000   Thu, 28 Jul 2022 22:52:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    embed-certs-20220728154707-12923
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                a360caa3-de3e-4d13-bba8-b88d7ca01c92
	  Boot ID:                    defbf4cf-33ea-4fb3-a1e4-ed0c170fdaf9
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-vlhnt                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     63s
	  kube-system                 etcd-embed-certs-20220728154707-12923                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         76s
	  kube-system                 kube-apiserver-embed-certs-20220728154707-12923             250m (4%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-embed-certs-20220728154707-12923    200m (3%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-h9xkx                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-embed-certs-20220728154707-12923             100m (1%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 metrics-server-5c6f97fb75-c8z6p                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         61s
	  kube-system                 storage-provisioner                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-tvjpw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-nj4nt                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 62s   kube-proxy       
	  Normal  Starting                 76s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  76s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  76s   kubelet          Node embed-certs-20220728154707-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s   kubelet          Node embed-certs-20220728154707-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s   kubelet          Node embed-certs-20220728154707-12923 status is now: NodeHasSufficientPID
	  Normal  NodeReady                76s   kubelet          Node embed-certs-20220728154707-12923 status is now: NodeReady
	  Normal  RegisteredNode           64s   node-controller  Node embed-certs-20220728154707-12923 event: Registered Node embed-certs-20220728154707-12923 in Controller
	  Normal  Starting                 7s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node embed-certs-20220728154707-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node embed-certs-20220728154707-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node embed-certs-20220728154707-12923 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s    kubelet          Updated Node Allocatable limit across pods
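	The block above is standard "kubectl describe node" output; to regenerate it against this cluster (assuming the context is set):
	
	    kubectl describe node embed-certs-20220728154707-12923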
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [667a33b70f6d] <==
	* {"level":"info","ts":"2022-07-28T22:52:52.123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-07-28T22:52:52.123Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-07-28T22:52:52.125Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-28T22:52:52.125Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T22:52:52.125Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T22:52:52.125Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-28T22:52:52.125Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T22:52:52.370Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:embed-certs-20220728154707-12923 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T22:52:52.371Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T22:52:52.372Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-28T22:52:52.372Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:54:14 up  1:15,  0 users,  load average: 2.81, 1.32, 1.08
	Linux embed-certs-20220728154707-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [f1d5808f4a8c] <==
	* I0728 22:52:56.573192       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0728 22:52:57.026902       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0728 22:52:57.032340       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0728 22:52:57.041008       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0728 22:52:57.125101       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0728 22:53:10.201247       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0728 22:53:10.251515       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0728 22:53:11.743094       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0728 22:53:12.326946       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.99.168.195]
	I0728 22:53:12.965424       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.105.174.105]
	I0728 22:53:13.027595       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.110.229.163]
	W0728 22:53:13.232538       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 22:53:13.232577       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0728 22:53:13.232583       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 22:53:13.232599       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 22:53:13.234651       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0728 22:53:13.234679       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 22:54:13.189874       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 22:54:13.189914       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0728 22:54:13.189921       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 22:54:13.190974       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 22:54:13.191064       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0728 22:54:13.191091       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
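	The v1beta1.metrics.k8s.io 503s above mean the aggregated APIService has no healthy backend yet (metrics-server is still Pending per the pod list earlier). Two sketch commands to confirm, assuming the current kubectl context:
	
	    kubectl get apiservice v1beta1.metrics.k8s.io   # AVAILABLE should read False until the pod runs
	    kubectl get --raw /apis/metrics.k8s.io/v1beta1  # returns a ServiceUnavailable error while it is down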
	
	* 
	* ==> kube-controller-manager [d34e7a0e7b80] <==
	* I0728 22:53:10.452676       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-wnn6n"
	I0728 22:53:10.456557       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vlhnt"
	I0728 22:53:10.471543       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-wnn6n"
	I0728 22:53:12.145638       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0728 22:53:12.154590       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-c8z6p"
	I0728 22:53:12.824581       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0728 22:53:12.830994       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 22:53:12.833193       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0728 22:53:12.842398       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:53:12.844558       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:53:12.850626       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 22:53:12.851798       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:53:12.852304       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:53:12.862301       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 22:53:12.862315       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:53:12.862594       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 22:53:12.862680       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:53:12.866664       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:53:12.866791       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 22:53:12.870845       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 22:53:12.870977       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 22:53:12.922483       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-tvjpw"
	I0728 22:53:12.923592       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-nj4nt"
	E0728 22:54:06.422531       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0728 22:54:06.477627       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
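	The resource-quota and garbage-collector warnings are the same symptom surfacing in the controller manager: API discovery trips over the unavailable metrics.k8s.io group. A quick sketch to spot any such group:
	
	    kubectl get apiservices | grep -v True   # anything not Available blocks full discovery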
	
	* 
	* ==> kube-proxy [3f91d11d220a] <==
	* I0728 22:53:11.559166       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0728 22:53:11.559279       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0728 22:53:11.559363       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 22:53:11.732009       1 server_others.go:206] "Using iptables Proxier"
	I0728 22:53:11.732037       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 22:53:11.732043       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 22:53:11.732054       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 22:53:11.732086       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:53:11.733039       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 22:53:11.737906       1 server.go:661] "Version info" version="v1.24.3"
	I0728 22:53:11.737942       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 22:53:11.738290       1 config.go:317] "Starting service config controller"
	I0728 22:53:11.738326       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 22:53:11.738399       1 config.go:444] "Starting node config controller"
	I0728 22:53:11.738407       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 22:53:11.738464       1 config.go:226] "Starting endpoint slice config controller"
	I0728 22:53:11.738470       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 22:53:11.838877       1 shared_informer.go:262] Caches are synced for node config
	I0728 22:53:11.838921       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0728 22:53:11.838924       1 shared_informer.go:262] Caches are synced for service config
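	kube-proxy chose the iptables proxier above; its rules can be inspected from inside the node. A sketch, assuming the KUBE-SERVICES chain name used by the iptables mode:
	
	    minikube -p embed-certs-20220728154707-12923 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20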
	
	* 
	* ==> kube-scheduler [a4c3594d7a36] <==
	* W0728 22:52:54.479794       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0728 22:52:54.479821       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0728 22:52:54.479760       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0728 22:52:54.479950       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0728 22:52:54.479979       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0728 22:52:54.480334       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0728 22:52:54.479901       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0728 22:52:54.480343       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0728 22:52:55.306677       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0728 22:52:55.306728       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0728 22:52:55.346997       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0728 22:52:55.347047       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0728 22:52:55.374495       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0728 22:52:55.374532       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0728 22:52:55.489410       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0728 22:52:55.489448       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0728 22:52:55.502639       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0728 22:52:55.502675       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0728 22:52:55.525592       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0728 22:52:55.525610       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0728 22:52:55.528392       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0728 22:52:55.528424       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0728 22:52:55.684282       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0728 22:52:55.684318       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0728 22:52:58.173751       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
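	
	The "forbidden" warnings above are the usual signature of a control plane still coming up after a restart: kube-scheduler starts listing resources before the apiserver has finished reconciling the bootstrap RBAC bindings for the system:kube-scheduler user, and the final "Caches are synced" line shows the condition clearing on its own. A way to double-check that the scheduler's permissions have settled (a generic kubectl impersonation check, not part of the test; the context name is taken from this run):
	
	    # should print "yes" once RBAC has settled; "no" would indicate a real RBAC problem
	    kubectl --context embed-certs-20220728154707-12923 auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler
	    # inspect the bootstrap binding that grants those list/watch permissions
	    kubectl --context embed-certs-20220728154707-12923 get clusterrolebinding system:kube-scheduler -o yaml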
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 22:48:18 UTC, end at Thu 2022-07-28 22:54:14 UTC. --
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.880866    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/4971ee0b-7c98-4351-b8af-e8c7ac2c0605-tmp-dir\") pod \"metrics-server-5c6f97fb75-c8z6p\" (UID: \"4971ee0b-7c98-4351-b8af-e8c7ac2c0605\") " pod="kube-system/metrics-server-5c6f97fb75-c8z6p"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.880884    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkwdg\" (UniqueName: \"kubernetes.io/projected/4971ee0b-7c98-4351-b8af-e8c7ac2c0605-kube-api-access-zkwdg\") pod \"metrics-server-5c6f97fb75-c8z6p\" (UID: \"4971ee0b-7c98-4351-b8af-e8c7ac2c0605\") " pod="kube-system/metrics-server-5c6f97fb75-c8z6p"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.880900    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvf57\" (UniqueName: \"kubernetes.io/projected/ab617b8b-a02e-4305-a631-7930f5b99a8e-kube-api-access-dvf57\") pod \"storage-provisioner\" (UID: \"ab617b8b-a02e-4305-a631-7930f5b99a8e\") " pod="kube-system/storage-provisioner"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.880916    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1cda32a7-99a3-47b3-b2e7-62ab885dc4d8-xtables-lock\") pod \"kube-proxy-h9xkx\" (UID: \"1cda32a7-99a3-47b3-b2e7-62ab885dc4d8\") " pod="kube-system/kube-proxy-h9xkx"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.880986    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1cda32a7-99a3-47b3-b2e7-62ab885dc4d8-lib-modules\") pod \"kube-proxy-h9xkx\" (UID: \"1cda32a7-99a3-47b3-b2e7-62ab885dc4d8\") " pod="kube-system/kube-proxy-h9xkx"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881011    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpq86\" (UniqueName: \"kubernetes.io/projected/15178362-740f-412b-847e-671a674e7a79-kube-api-access-fpq86\") pod \"coredns-6d4b75cb6d-vlhnt\" (UID: \"15178362-740f-412b-847e-671a674e7a79\") " pod="kube-system/coredns-6d4b75cb6d-vlhnt"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881051    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ab0cd45a-b372-4bd8-bc60-9d8a65175c7c-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-tvjpw\" (UID: \"ab0cd45a-b372-4bd8-bc60-9d8a65175c7c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tvjpw"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881098    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4ebca962-177f-4a70-9d72-b89712a84628-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-nj4nt\" (UID: \"4ebca962-177f-4a70-9d72-b89712a84628\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-nj4nt"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881118    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlk88\" (UniqueName: \"kubernetes.io/projected/ab0cd45a-b372-4bd8-bc60-9d8a65175c7c-kube-api-access-dlk88\") pod \"dashboard-metrics-scraper-dffd48c4c-tvjpw\" (UID: \"ab0cd45a-b372-4bd8-bc60-9d8a65175c7c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tvjpw"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881136    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1cda32a7-99a3-47b3-b2e7-62ab885dc4d8-kube-proxy\") pod \"kube-proxy-h9xkx\" (UID: \"1cda32a7-99a3-47b3-b2e7-62ab885dc4d8\") " pod="kube-system/kube-proxy-h9xkx"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881150    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbrsn\" (UniqueName: \"kubernetes.io/projected/4ebca962-177f-4a70-9d72-b89712a84628-kube-api-access-dbrsn\") pod \"kubernetes-dashboard-5fd5574d9f-nj4nt\" (UID: \"4ebca962-177f-4a70-9d72-b89712a84628\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-nj4nt"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881167    9729 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15178362-740f-412b-847e-671a674e7a79-config-volume\") pod \"coredns-6d4b75cb6d-vlhnt\" (UID: \"15178362-740f-412b-847e-671a674e7a79\") " pod="kube-system/coredns-6d4b75cb6d-vlhnt"
	Jul 28 22:54:07 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:07.881177    9729 reconciler.go:157] "Reconciler: start to sync state"
	Jul 28 22:54:09 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:09.037493    9729 request.go:601] Waited for 1.150296862s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Jul 28 22:54:09 embed-certs-20220728154707-12923 kubelet[9729]: E0728 22:54:09.100937    9729 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-embed-certs-20220728154707-12923\" already exists" pod="kube-system/kube-scheduler-embed-certs-20220728154707-12923"
	Jul 28 22:54:09 embed-certs-20220728154707-12923 kubelet[9729]: E0728 22:54:09.265816    9729 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-embed-certs-20220728154707-12923\" already exists" pod="kube-system/kube-apiserver-embed-certs-20220728154707-12923"
	Jul 28 22:54:09 embed-certs-20220728154707-12923 kubelet[9729]: E0728 22:54:09.482107    9729 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-embed-certs-20220728154707-12923\" already exists" pod="kube-system/etcd-embed-certs-20220728154707-12923"
	Jul 28 22:54:09 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:09.941393    9729 scope.go:110] "RemoveContainer" containerID="62ed001fa5326b9de1c551d9fffde74d4073cf2f4b4a51319ea7e12b16b8ac57"
	Jul 28 22:54:10 embed-certs-20220728154707-12923 kubelet[9729]: E0728 22:54:10.680286    9729 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 28 22:54:10 embed-certs-20220728154707-12923 kubelet[9729]: E0728 22:54:10.680424    9729 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 28 22:54:10 embed-certs-20220728154707-12923 kubelet[9729]: E0728 22:54:10.680564    9729 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zkwdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-c8z6p_kube-system(4971ee0b-7c98-4351-b8af-e8c7ac2c0605): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jul 28 22:54:10 embed-certs-20220728154707-12923 kubelet[9729]: E0728 22:54:10.680627    9729 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-c8z6p" podUID=4971ee0b-7c98-4351-b8af-e8c7ac2c0605
	Jul 28 22:54:10 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:10.910374    9729 scope.go:110] "RemoveContainer" containerID="62ed001fa5326b9de1c551d9fffde74d4073cf2f4b4a51319ea7e12b16b8ac57"
	Jul 28 22:54:10 embed-certs-20220728154707-12923 kubelet[9729]: I0728 22:54:10.910640    9729 scope.go:110] "RemoveContainer" containerID="0edb881f79a74cb60fe6109ac947b2fbcbe54ad26dde1ef26b2cae19fd21bc32"
	Jul 28 22:54:10 embed-certs-20220728154707-12923 kubelet[9729]: E0728 22:54:10.910823    9729 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-tvjpw_kubernetes-dashboard(ab0cd45a-b372-4bd8-bc60-9d8a65175c7c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-tvjpw" podUID=ab0cd45a-b372-4bd8-bc60-9d8a65175c7c
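	
	The ErrImagePull events above are expected in this test: the metrics-server addon was enabled with --registries=MetricsServer=fake.domain (see the Audit table below), so the kubelet is deliberately pointed at an unresolvable registry and the pod can never start. A sketch of how to observe the same failure from outside while the cluster is up (profile and pod names taken from this run):
	
	    # the Events section shows the repeated pull failures against fake.domain
	    kubectl --context embed-certs-20220728154707-12923 -n kube-system describe pod metrics-server-5c6f97fb75-c8z6p
	    # confirms the registry host does not resolve, matching the "no such host" errors
	    nslookup fake.domain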
	
	* 
	* ==> kubernetes-dashboard [9c10e21c6fa3] <==
	* 2022/07/28 22:53:19 Starting overwatch
	2022/07/28 22:53:19 Using namespace: kubernetes-dashboard
	2022/07/28 22:53:19 Using in-cluster config to connect to apiserver
	2022/07/28 22:53:19 Using secret token for csrf signing
	2022/07/28 22:53:19 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/28 22:53:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/28 22:53:19 Successful initial request to the apiserver, version: v1.24.3
	2022/07/28 22:53:19 Generating JWE encryption key
	2022/07/28 22:53:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/28 22:53:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/28 22:53:19 Initializing JWE encryption key from synchronized object
	2022/07/28 22:53:19 Creating in-cluster Sidecar client
	2022/07/28 22:53:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/28 22:53:19 Serving insecurely on HTTP port: 9090
	2022/07/28 22:54:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
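	
	These repeated "Metric client health check failed ... (get services dashboard-metrics-scraper)" lines line up with the kubelet block above, where dashboard-metrics-scraper is crash-looping: the addon was enabled with --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 (see the Audit table below), so the scraper container cannot actually serve metrics. One way to confirm while the cluster is up (names taken from this run):
	
	    # an empty ENDPOINTS column explains the "unable to handle the request" errors above
	    kubectl --context embed-certs-20220728154707-12923 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper
	    kubectl --context embed-certs-20220728154707-12923 -n kubernetes-dashboard get pods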
	
	* 
	* ==> storage-provisioner [270dbea174a4] <==
	* I0728 22:53:13.228473       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 22:53:13.237230       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 22:53:13.237275       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0728 22:53:13.242360       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0728 22:53:13.242466       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20220728154707-12923_500a6a72-89ba-48ed-b2d0-5c509346f99f!
	I0728 22:53:13.242677       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"53f95cd3-6964-4e7e-aa1a-4a3695ff925a", APIVersion:"v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20220728154707-12923_500a6a72-89ba-48ed-b2d0-5c509346f99f became leader
	I0728 22:53:13.342924       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20220728154707-12923_500a6a72-89ba-48ed-b2d0-5c509346f99f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220728154707-12923 -n embed-certs-20220728154707-12923
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220728154707-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-c8z6p
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220728154707-12923 describe pod metrics-server-5c6f97fb75-c8z6p
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220728154707-12923 describe pod metrics-server-5c6f97fb75-c8z6p: exit status 1 (294.703807ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-c8z6p" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220728154707-12923 describe pod metrics-server-5c6f97fb75-c8z6p: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (43.42s)
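
Both Pause failures in this group have the same shape: minikube pause exits 0, but the follow-up status check reports the apiserver as "Stopped" rather than "Paused". A minimal manual reproduction of the sequence the test drives (the same commands the test runs; "p1" is a hypothetical profile name):

    out/minikube-darwin-amd64 start -p p1 --driver=docker --kubernetes-version=v1.24.3
    out/minikube-darwin-amd64 pause -p p1 --alsologtostderr -v=1
    # the test asserts this prints "Paused"; in this run it printed "Stopped"
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p p1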

TestStartStop/group/default-k8s-different-port/serial/Pause (43.33s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220728155420-12923 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220728155420-12923 -n default-k8s-different-port-20220728155420-12923

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220728155420-12923 -n default-k8s-different-port-20220728155420-12923: exit status 2 (16.086206551s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220728155420-12923 -n default-k8s-different-port-20220728155420-12923
E0728 16:01:07.812058   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220728155420-12923 -n default-k8s-different-port-20220728155420-12923: exit status 2 (16.087079197s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220728155420-12923 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220728155420-12923 -n default-k8s-different-port-20220728155420-12923
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220728155420-12923 -n default-k8s-different-port-20220728155420-12923

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220728155420-12923
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220728155420-12923:

-- stdout --
	[
	    {
	        "Id": "d8d7f92f3558c28362c6b5ddb38dcf96c632b6886b3e774ea6e79d70f629962d",
	        "Created": "2022-07-28T22:54:26.652227115Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291298,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:55:31.996485314Z",
	            "FinishedAt": "2022-07-28T22:55:30.055048022Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/d8d7f92f3558c28362c6b5ddb38dcf96c632b6886b3e774ea6e79d70f629962d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d8d7f92f3558c28362c6b5ddb38dcf96c632b6886b3e774ea6e79d70f629962d/hostname",
	        "HostsPath": "/var/lib/docker/containers/d8d7f92f3558c28362c6b5ddb38dcf96c632b6886b3e774ea6e79d70f629962d/hosts",
	        "LogPath": "/var/lib/docker/containers/d8d7f92f3558c28362c6b5ddb38dcf96c632b6886b3e774ea6e79d70f629962d/d8d7f92f3558c28362c6b5ddb38dcf96c632b6886b3e774ea6e79d70f629962d-json.log",
	        "Name": "/default-k8s-different-port-20220728155420-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220728155420-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220728155420-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/19cf15522acce53c75e401f40b025a238bf85a62355c17678c2b59e6f96d73c1-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/19cf15522acce53c75e401f40b025a238bf85a62355c17678c2b59e6f96d73c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/19cf15522acce53c75e401f40b025a238bf85a62355c17678c2b59e6f96d73c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/19cf15522acce53c75e401f40b025a238bf85a62355c17678c2b59e6f96d73c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220728155420-12923",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220728155420-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220728155420-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220728155420-12923",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220728155420-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "341f7fa70f2c12d2fc7ff1a8c5a50fbf954d395bb7cfece6bf2947e9163ff9f2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59511"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59512"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59513"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59514"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59515"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/341f7fa70f2c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220728155420-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d8d7f92f3558",
	                        "default-k8s-different-port-20220728155420-12923"
	                    ],
	                    "NetworkID": "a61796860495adbfacdefc21f813297bd80016d2dc54d641ab8799dccf8c786d",
	                    "EndpointID": "63d545a57fb144ddd794307d9258933a2fef916ed7e08004f84f000eafb63a4b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
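
Note that in the inspect output above the kic container itself is healthy ("Running": true, "Paused": false), so the "Stopped" apiserver status points at the processes inside the container rather than at the container state. The two relevant fields can be pulled directly (container name taken from this run):

    docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' default-k8s-different-port-20220728155420-12923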
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220728155420-12923 -n default-k8s-different-port-20220728155420-12923
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220728155420-12923 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220728155420-12923 logs -n 25: (2.85591565s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | old-k8s-version-20220728153807-12923            | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220728155419-12923      | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | disable-driver-mounts-20220728155419-12923        |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:00 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:00 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:01 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 15:55:30
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 15:55:30.758749   30316 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:55:30.758949   30316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:55:30.758955   30316 out.go:309] Setting ErrFile to fd 2...
	I0728 15:55:30.758959   30316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:55:30.759060   30316 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:55:30.759520   30316 out.go:303] Setting JSON to false
	I0728 15:55:30.774489   30316 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9972,"bootTime":1659038958,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:55:30.774588   30316 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:55:30.795830   30316 out.go:177] * [default-k8s-different-port-20220728155420-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:55:30.839027   30316 notify.go:193] Checking for updates...
	I0728 15:55:30.860766   30316 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:55:30.882054   30316 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:55:30.903796   30316 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:55:30.924802   30316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:55:30.946060   30316 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:55:30.968665   30316 config.go:178] Loaded profile config "default-k8s-different-port-20220728155420-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:55:30.969316   30316 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:55:31.036834   30316 docker.go:137] docker version: linux-20.10.17
	I0728 15:55:31.037003   30316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:55:31.170708   30316 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:55:31.104106473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:55:31.213459   30316 out.go:177] * Using the docker driver based on existing profile
	I0728 15:55:31.235399   30316 start.go:284] selected driver: docker
	I0728 15:55:31.235424   30316 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220728155420-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port
-20220728155420-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:tru
e] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:55:31.235546   30316 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:55:31.238871   30316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:55:31.370748   30316 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:55:31.306721001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:55:31.370921   30316 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 15:55:31.370940   30316 cni.go:95] Creating CNI manager for ""
	I0728 15:55:31.370951   30316 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:55:31.370961   30316 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220728155420-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728155420-12923 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Netw
ork: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:55:31.392647   30316 out.go:177] * Starting control plane node default-k8s-different-port-20220728155420-12923 in cluster default-k8s-different-port-20220728155420-12923
	I0728 15:55:31.414823   30316 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:55:31.436734   30316 out.go:177] * Pulling base image ...
	I0728 15:55:31.478779   30316 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:55:31.478823   30316 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:55:31.478857   30316 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0728 15:55:31.478885   30316 cache.go:57] Caching tarball of preloaded images
	I0728 15:55:31.479127   30316 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:55:31.479764   30316 cache.go:60] Finished verifying existence of preloaded tar for v1.24.3 on docker
	I0728 15:55:31.480270   30316 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/config.json ...
	I0728 15:55:31.541959   30316 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:55:31.541977   30316 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:55:31.541987   30316 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:55:31.542029   30316 start.go:370] acquiring machines lock for default-k8s-different-port-20220728155420-12923: {Name:mk0e822f9f2b9adffe1c022a5e24460488a5334a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:55:31.542121   30316 start.go:374] acquired machines lock for "default-k8s-different-port-20220728155420-12923" in 68.722µs
	I0728 15:55:31.542144   30316 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:55:31.542153   30316 fix.go:55] fixHost starting: 
	I0728 15:55:31.542383   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 15:55:31.605720   30316 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220728155420-12923: state=Stopped err=<nil>
	W0728 15:55:31.605754   30316 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:55:31.627632   30316 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220728155420-12923" ...
	I0728 15:55:31.649486   30316 cli_runner.go:164] Run: docker start default-k8s-different-port-20220728155420-12923
	I0728 15:55:31.993066   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 15:55:32.060550   30316 kic.go:415] container "default-k8s-different-port-20220728155420-12923" state is running.
	I0728 15:55:32.061181   30316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.129615   30316 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/config.json ...
	I0728 15:55:32.130011   30316 machine.go:88] provisioning docker machine ...
	I0728 15:55:32.130035   30316 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220728155420-12923"
	I0728 15:55:32.130125   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.198326   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:32.198552   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:32.198567   30316 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220728155420-12923 && echo "default-k8s-different-port-20220728155420-12923" | sudo tee /etc/hostname
	I0728 15:55:32.326053   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220728155420-12923
	
	I0728 15:55:32.326140   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.391769   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:32.391946   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:32.391963   30316 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220728155420-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220728155420-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220728155420-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:55:32.512852   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: 
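The exchange above is minikube's idempotent hostname registration in /etc/hosts: if a 127.0.1.1 line already exists it is rewritten in place with sed, otherwise one is appended. A minimal standalone sketch of the same pattern, assuming GNU grep/sed; NEW_HOSTNAME is a placeholder, not a value from this log:

	NEW_HOSTNAME=example-node                    # hypothetical hostname
	if ! grep -q "\s${NEW_HOSTNAME}$" /etc/hosts; then
		if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
			# an entry exists: rewrite it in place
			sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${NEW_HOSTNAME}/" /etc/hosts
		else
			# no entry yet: append one
			echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts
		fi
	fi
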
	I0728 15:55:32.512877   30316 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:55:32.512901   30316 ubuntu.go:177] setting up certificates
	I0728 15:55:32.512910   30316 provision.go:83] configureAuth start
	I0728 15:55:32.512984   30316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.578514   30316 provision.go:138] copyHostCerts
	I0728 15:55:32.578594   30316 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:55:32.578603   30316 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:55:32.578690   30316 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:55:32.578899   30316 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:55:32.578917   30316 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:55:32.578982   30316 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:55:32.579122   30316 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:55:32.579139   30316 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:55:32.579198   30316 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:55:32.579317   30316 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220728155420-12923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220728155420-12923]
	I0728 15:55:32.674959   30316 provision.go:172] copyRemoteCerts
	I0728 15:55:32.675028   30316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:55:32.675080   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.741088   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:32.827517   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:55:32.845806   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0728 15:55:32.863725   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 15:55:32.880989   30316 provision.go:86] duration metric: configureAuth took 368.070725ms
	I0728 15:55:32.881002   30316 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:55:32.881150   30316 config.go:178] Loaded profile config "default-k8s-different-port-20220728155420-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:55:32.881205   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.946636   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:32.946805   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:32.946817   30316 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:55:33.067666   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:55:33.067682   30316 ubuntu.go:71] root file system type: overlay
	I0728 15:55:33.067838   30316 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:55:33.067911   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.131727   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:33.131899   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:33.131963   30316 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:55:33.261631   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:55:33.261710   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.325641   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:33.325808   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:33.325822   30316 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:55:33.449804   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: 
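The SSH commands above implement a write-if-changed update: render the unit to docker.service.new, diff it against the live file, and only on a difference swap the new file in and reload/restart, so an identical config never restarts the daemon. A generic sketch of that pattern (UNIT is a placeholder, and the rendered content is assumed to be in ${UNIT}.new already):

	UNIT=/lib/systemd/system/docker.service      # hypothetical target unit
	if ! sudo diff -u "$UNIT" "${UNIT}.new"; then
		# files differ: install the new unit and restart the service
		sudo mv "${UNIT}.new" "$UNIT"
		sudo systemctl daemon-reload
		sudo systemctl restart "$(basename "$UNIT")"
	fi
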
	I0728 15:55:33.449827   30316 machine.go:91] provisioned docker machine in 1.319828847s
	I0728 15:55:33.449836   30316 start.go:307] post-start starting for "default-k8s-different-port-20220728155420-12923" (driver="docker")
	I0728 15:55:33.449845   30316 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:55:33.449924   30316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:55:33.449974   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.513907   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:33.601033   30316 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:55:33.604777   30316 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:55:33.604802   30316 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:55:33.604815   30316 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:55:33.604824   30316 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:55:33.604833   30316 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:55:33.604935   30316 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:55:33.605075   30316 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:55:33.605215   30316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:55:33.611867   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:55:33.627853   30316 start.go:310] post-start completed in 178.00861ms
	I0728 15:55:33.627932   30316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:55:33.627977   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.695005   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:33.783324   30316 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:55:33.787801   30316 fix.go:57] fixHost completed within 2.245684947s
	I0728 15:55:33.787824   30316 start.go:82] releasing machines lock for "default-k8s-different-port-20220728155420-12923", held for 2.245723265s
	I0728 15:55:33.787920   30316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.853332   30316 ssh_runner.go:195] Run: systemctl --version
	I0728 15:55:33.853337   30316 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:55:33.853402   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.853416   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.920417   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:33.920583   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:34.004995   30316 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:55:34.200488   30316 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:55:34.200552   30316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:55:34.212709   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:55:34.225673   30316 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:55:34.294929   30316 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:55:34.354645   30316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:55:34.418257   30316 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:55:34.646963   30316 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 15:55:34.712035   30316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:55:34.777443   30316 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 15:55:34.786754   30316 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 15:55:34.786821   30316 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 15:55:34.790552   30316 start.go:471] Will wait 60s for crictl version
	I0728 15:55:34.790591   30316 ssh_runner.go:195] Run: sudo crictl version
	I0728 15:55:34.891481   30316 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0728 15:55:34.891547   30316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:55:34.925151   30316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:55:35.004758   30316 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 15:55:35.004838   30316 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220728155420-12923 dig +short host.docker.internal
	I0728 15:55:35.122418   30316 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:55:35.122524   30316 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:55:35.126670   30316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
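This /etc/hosts update uses a different idempotency trick from the hostname patch earlier: filter out any existing host.minikube.internal line with grep -v, append a fresh mapping, and copy the rebuilt file back, so repeated runs never accumulate duplicate entries. A generalized sketch (NAME and IP are placeholders, not values from this log):

	NAME=host.minikube.internal                  # hypothetical entry name
	IP=192.168.65.2                              # hypothetical address
	# rebuild the file without the old entry, append the fresh one, swap in
	{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/hosts.$$"
	sudo cp "/tmp/hosts.$$" /etc/hosts
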
	I0728 15:55:35.135876   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:35.199963   30316 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:55:35.200048   30316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:55:35.230417   30316 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0728 15:55:35.230434   30316 docker.go:542] Images already preloaded, skipping extraction
	I0728 15:55:35.230509   30316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:55:35.259780   30316 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0728 15:55:35.259801   30316 cache_images.go:84] Images are preloaded, skipping loading
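The preload verification above is just set membership: list what the runtime already holds with `docker images --format {{.Repository}}:{{.Tag}}` and compare against the expected image list; only a miss triggers tarball extraction. The core test, reduced to a single image (IMAGE is a placeholder):

	IMAGE=k8s.gcr.io/pause:3.7                   # hypothetical image to check
	if docker images --format '{{.Repository}}:{{.Tag}}' | grep -qxF "$IMAGE"; then
		echo "present - no load needed"
	else
		echo "missing - extract from preload tarball"
	fi
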
	I0728 15:55:35.259876   30316 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:55:35.335289   30316 cni.go:95] Creating CNI manager for ""
	I0728 15:55:35.335301   30316 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:55:35.335314   30316 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:55:35.335329   30316 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220728155420-12923 NodeName:default-k8s-different-port-20220728155420-12923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 Cgr
oupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:55:35.335442   30316 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220728155420-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
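Note that the generated kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new rather than applied outright; the restart path below diffs it against the existing kubeadm.yaml to decide whether the cluster configuration actually changed. A sketch of that decision step (the branch bodies are placeholders, not minikube's exact restart logic):

	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
		echo "kubeadm config unchanged"          # a soft restart is enough
	else
		echo "kubeadm config drifted"            # reconfiguration needed
	fi
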
	I0728 15:55:35.335545   30316 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220728155420-12923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728155420-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0728 15:55:35.335608   30316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 15:55:35.342895   30316 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:55:35.342938   30316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:55:35.349853   30316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0728 15:55:35.361884   30316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:55:35.374675   30316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0728 15:55:35.386579   30316 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:55:35.390206   30316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:55:35.399217   30316 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923 for IP: 192.168.67.2
	I0728 15:55:35.399333   30316 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:55:35.399381   30316 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:55:35.399467   30316 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.key
	I0728 15:55:35.399524   30316 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/apiserver.key.c7fa3a9e
	I0728 15:55:35.399597   30316 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/proxy-client.key
	I0728 15:55:35.399795   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:55:35.399835   30316 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:55:35.399850   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:55:35.399884   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:55:35.399915   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:55:35.399943   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:55:35.400003   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:55:35.400535   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:55:35.416976   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 15:55:35.433119   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:55:35.449169   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 15:55:35.465944   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:55:35.482473   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:55:35.499106   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:55:35.515045   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:55:35.531222   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:55:35.548054   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:55:35.564648   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:55:35.581798   30316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 15:55:35.593965   30316 ssh_runner.go:195] Run: openssl version
	I0728 15:55:35.599527   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:55:35.607305   30316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:55:35.611358   30316 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:55:35.611403   30316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:55:35.616434   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 15:55:35.623673   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:55:35.631654   30316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:55:35.635527   30316 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:55:35.635576   30316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:55:35.640871   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 15:55:35.648249   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:55:35.655974   30316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:55:35.660049   30316 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:55:35.660093   30316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:55:35.665541   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
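The openssl/ln pairs above install CAs the way OpenSSL's certificate-directory lookup expects: /etc/ssl/certs is searched by subject-name hash, so each PEM needs a <hash>.0 symlink (b5213941.0, 51391683.0, and 3ec20f2e.0 here). Doing the same for one certificate by hand, with CERT as a placeholder path:

	CERT=/usr/share/ca-certificates/minikubeCA.pem   # hypothetical certificate
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
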
	I0728 15:55:35.672843   30316 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220728155420-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728155420-1292
3 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:55:35.672943   30316 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:55:35.701097   30316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:55:35.708861   30316 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 15:55:35.708875   30316 kubeadm.go:626] restartCluster start
	I0728 15:55:35.708918   30316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 15:55:35.715675   30316 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:35.715731   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:35.792225   30316 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220728155420-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:55:35.792426   30316 kubeconfig.go:127] "default-k8s-different-port-20220728155420-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 15:55:35.792849   30316 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:55:35.794157   30316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 15:55:35.802225   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:35.802292   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:35.810745   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:36.012529   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:36.012639   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:36.023340   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:36.210864   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:36.211001   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:36.220628   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:36.411346   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:36.411538   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:36.421878   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:36.611205   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:36.611335   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:36.621648   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:36.812972   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:36.813075   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:36.823163   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:37.012867   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:37.013068   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:37.023408   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:37.212893   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:37.213035   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:37.223601   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:37.412922   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:37.413087   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:37.423314   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:37.612969   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:37.613078   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:37.623490   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:37.812353   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:37.812456   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:37.823336   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.010830   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:38.010916   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:38.019821   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.212896   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:38.213005   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:38.223210   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.410813   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:38.410939   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:38.420711   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.611654   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:38.611825   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:38.622545   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.811275   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:38.811362   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:38.821118   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.821127   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:38.821175   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:38.830270   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.830283   30316 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
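
Note: the block above is minikube probing for a running kube-apiserver roughly every 200ms and giving up after about 3s, at which point it decides the node needs reconfiguring. A minimal Go sketch of the same poll-until-deadline pattern, built around the pgrep probe shown in the log (helper names and timings here are illustrative, not minikube's internals):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// checkAPIServer runs the probe from the log; pgrep exits non-zero when
// no matching process exists, which exec surfaces as an error.
func checkAPIServer() error {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return fmt.Errorf("unable to get apiserver pid: %w", err)
	}
	fmt.Printf("apiserver pid: %s", out)
	return nil
}

func main() {
	deadline := time.Now().Add(3 * time.Second) // the log shows ~3s of retries
	for time.Now().Before(deadline) {
		if err := checkAPIServer(); err == nil {
			return
		}
		time.Sleep(200 * time.Millisecond) // probes in the log are ~200ms apart
	}
	fmt.Println("needs reconfigure: apiserver error: timed out waiting for the condition")
}
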
	I0728 15:55:38.830287   30316 kubeadm.go:1092] stopping kube-system containers ...
	I0728 15:55:38.830345   30316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:55:38.860913   30316 docker.go:443] Stopping containers: [7ec6c6148c24 fa1f81d73248 17fdf8663ffe 2fdb37a458e2 b90b24e0cb1f c98d3d66b2c3 d9963c325023 0204554478fa ab27dfcb31e5 49d7071af640 288680d90206 651e3092e073 ac40e1ec26cd 9ab0cfc84627 c85c92139dea 9a3c98b2d6cb]
	I0728 15:55:38.860989   30316 ssh_runner.go:195] Run: docker stop 7ec6c6148c24 fa1f81d73248 17fdf8663ffe 2fdb37a458e2 b90b24e0cb1f c98d3d66b2c3 d9963c325023 0204554478fa ab27dfcb31e5 49d7071af640 288680d90206 651e3092e073 ac40e1ec26cd 9ab0cfc84627 c85c92139dea 9a3c98b2d6cb
	I0728 15:55:38.890667   30316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 15:55:38.900785   30316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:55:38.907947   30316 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 28 22:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 28 22:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jul 28 22:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 28 22:54 /etc/kubernetes/scheduler.conf
	
	I0728 15:55:38.908005   30316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0728 15:55:38.915307   30316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0728 15:55:38.922584   30316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0728 15:55:38.929622   30316 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.929673   30316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 15:55:38.936409   30316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0728 15:55:38.943336   30316 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.943378   30316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
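
Note: the grep-and-remove sequence above is a stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected endpoint https://control-plane.minikube.internal:8444, and any file where the grep exits non-zero (controller-manager.conf and scheduler.conf here) is deleted so the kubeadm phases below regenerate it.
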
	I0728 15:55:38.950159   30316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:55:38.957430   30316 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 15:55:38.957440   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:39.001063   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:39.978959   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:40.156999   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:40.205624   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
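
Note: rather than a full `kubeadm init`, the restart path above replays five individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A hypothetical Go sketch of that sequence; the PATH-prefixed env wrapper from the log is omitted for brevity:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The five kubeadm init phases the log runs, in order.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
	fmt.Println("control plane components restarted")
}
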
	I0728 15:55:40.261056   30316 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:55:40.261121   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:55:40.793843   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:55:41.293947   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:55:41.312913   30316 api_server.go:71] duration metric: took 1.051877628s to wait for apiserver process to appear ...
	I0728 15:55:41.312925   30316 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:55:41.312937   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:41.314088   30316 api_server.go:256] stopped: https://127.0.0.1:59515/healthz: Get "https://127.0.0.1:59515/healthz": EOF
	I0728 15:55:41.814307   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:44.109812   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 15:55:44.109835   30316 api_server.go:102] status: https://127.0.0.1:59515/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 15:55:44.314142   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:44.320716   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:55:44.320735   30316 api_server.go:102] status: https://127.0.0.1:59515/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:55:44.814131   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:44.819883   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:55:44.819900   30316 api_server.go:102] status: https://127.0.0.1:59515/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:55:45.316185   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:45.323137   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 200:
	ok
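
Note: the healthz sequence above is the normal bring-up order: 403 while the anonymous probe is rejected before RBAC is bootstrapped, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks are still failing, then 200. A minimal sketch of such a probe, assuming the localhost-forwarded port from the log and skipping TLS verification because the apiserver certificate does not cover 127.0.0.1 (a test-only shortcut):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz fetches /healthz and returns the status code and body.
func checkHealthz(url string) (int, string, error) {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	resp, err := client.Get(url)
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), nil
}

func main() {
	url := "https://127.0.0.1:59515/healthz" // forwarded port from the log
	for {
		code, body, err := checkHealthz(url)
		if err == nil && code == http.StatusOK {
			fmt.Println("apiserver healthy:", body) // "ok"
			return
		}
		fmt.Printf("status %d, err %v - retrying\n", code, err)
		time.Sleep(500 * time.Millisecond) // probes in the log are ~500ms apart
	}
}
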
	I0728 15:55:45.329031   30316 api_server.go:140] control plane version: v1.24.3
	I0728 15:55:45.329042   30316 api_server.go:130] duration metric: took 4.016180474s to wait for apiserver health ...
	I0728 15:55:45.329048   30316 cni.go:95] Creating CNI manager for ""
	I0728 15:55:45.329052   30316 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:55:45.329064   30316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:55:45.335966   30316 system_pods.go:59] 8 kube-system pods found
	I0728 15:55:45.335982   30316 system_pods.go:61] "coredns-6d4b75cb6d-p47tc" [097a4ddd-127a-4d76-9ef2-b31856680a61] Running
	I0728 15:55:45.335987   30316 system_pods.go:61] "etcd-default-k8s-different-port-20220728155420-12923" [b3af4c0d-6e4a-4de2-94a7-6f0e9804c43e] Running
	I0728 15:55:45.335992   30316 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220728155420-12923" [915ee0d1-3a30-4488-a8fc-a2fd46ff53dc] Running
	I0728 15:55:45.335999   30316 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220728155420-12923" [6697437d-e349-4688-91e7-6755001fc84c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0728 15:55:45.336004   30316 system_pods.go:61] "kube-proxy-nbrlj" [9c349fd7-0054-4e68-8374-3d4ccfb14b9d] Running
	I0728 15:55:45.336008   30316 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220728155420-12923" [f7033e33-c299-41ac-b929-b557f088bd55] Running
	I0728 15:55:45.336013   30316 system_pods.go:61] "metrics-server-5c6f97fb75-q8trj" [880b41fa-bdc2-4c65-b3c0-05c1487607d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:55:45.336019   30316 system_pods.go:61] "storage-provisioner" [18cb92a5-43f3-4ec9-aa95-d82651b00937] Running
	I0728 15:55:45.336023   30316 system_pods.go:74] duration metric: took 6.955342ms to wait for pod list to return data ...
	I0728 15:55:45.336030   30316 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:55:45.339315   30316 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:55:45.339330   30316 node_conditions.go:123] node cpu capacity is 6
	I0728 15:55:45.339339   30316 node_conditions.go:105] duration metric: took 3.304859ms to run NodePressure ...
	I0728 15:55:45.339355   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:45.458228   30316 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0728 15:55:45.462563   30316 kubeadm.go:777] kubelet initialised
	I0728 15:55:45.462573   30316 kubeadm.go:778] duration metric: took 4.332569ms waiting for restarted kubelet to initialise ...
	I0728 15:55:45.462580   30316 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:55:45.466883   30316 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-p47tc" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.471320   30316 pod_ready.go:92] pod "coredns-6d4b75cb6d-p47tc" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:45.471328   30316 pod_ready.go:81] duration metric: took 4.434775ms waiting for pod "coredns-6d4b75cb6d-p47tc" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.471334   30316 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.475851   30316 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:45.475859   30316 pod_ready.go:81] duration metric: took 4.521291ms waiting for pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.475864   30316 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.480433   30316 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:45.480440   30316 pod_ready.go:81] duration metric: took 4.572466ms waiting for pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.480448   30316 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:47.739926   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:49.740311   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:52.239842   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:54.741106   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:57.241016   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:59.242030   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:59.739349   30316 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:59.739360   30316 pod_ready.go:81] duration metric: took 14.259145114s waiting for pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.739367   30316 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nbrlj" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.743616   30316 pod_ready.go:92] pod "kube-proxy-nbrlj" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:59.743625   30316 pod_ready.go:81] duration metric: took 4.252558ms waiting for pod "kube-proxy-nbrlj" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.743630   30316 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.747906   30316 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:59.747914   30316 pod_ready.go:81] duration metric: took 4.279711ms waiting for pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.747923   30316 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace to be "Ready" ...
	I0728 15:56:01.759363   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:04.256978   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:06.261199   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:08.757310   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:10.759539   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:13.256936   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:15.259376   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:17.260510   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:19.756798   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:21.758954   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:24.260530   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:26.758177   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:28.759402   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:31.258712   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:33.259127   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:35.259789   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:37.761287   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:40.257352   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:42.259524   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:44.260773   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:46.758827   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:48.760537   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:51.260493   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:53.756371   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:55.759549   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:58.257034   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:00.259021   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:02.267501   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:04.760386   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:07.257438   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:09.758070   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:12.259544   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:14.757376   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:16.758596   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:19.258413   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:21.258981   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:23.758572   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:26.257652   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:28.759470   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:30.759647   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:33.259035   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:35.756810   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:38.255923   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:40.256087   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:42.257292   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:44.257582   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:46.759216   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:49.256554   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:51.259323   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:53.757697   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:56.259190   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:58.756634   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:00.758211   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:03.256203   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:05.757714   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:08.255918   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:10.757402   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:13.256697   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:15.759116   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:18.254886   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:20.256061   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:22.256139   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:24.758663   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:27.256244   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:29.258507   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:31.755489   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:33.758645   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:36.256430   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:38.757168   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:40.757635   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:43.255776   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:45.255876   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:47.256269   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:49.256281   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:51.756592   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:54.255665   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:56.756610   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:59.255251   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:01.255321   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:03.257227   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:05.756051   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:08.255467   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:10.256148   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:12.755028   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:14.756329   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:16.756937   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:19.253884   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:21.255451   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:23.756009   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:25.757047   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:28.257085   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:30.755577   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:33.255299   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:35.257914   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:37.757572   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:40.256191   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:42.256681   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:44.754921   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:46.755973   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:48.756572   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:50.756662   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:53.253971   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:55.256336   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:57.754071   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:59.748516   30316 pod_ready.go:81] duration metric: took 4m0.004577503s waiting for pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace to be "Ready" ...
	E0728 15:59:59.748542   30316 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0728 15:59:59.748610   30316 pod_ready.go:38] duration metric: took 4m14.290256677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:59:59.748647   30316 kubeadm.go:630] restartCluster took 4m24.044163545s
	W0728 15:59:59.748768   30316 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
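
Note: this timeout is the predictable consequence of CustomAddonRegistries:map[MetricsServer:fake.domain] in the StartCluster dump: the metrics-server image presumably can never be pulled from that registry, so the pod stays unready for the full 4m0s extra wait and minikube falls back to a clean `kubeadm reset` plus `kubeadm init` below. By hand, the same wait would look roughly like `kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-5c6f97fb75-q8trj --timeout=4m`.
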
	I0728 15:59:59.748795   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0728 16:00:02.104339   30316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.355568477s)
	I0728 16:00:02.104398   30316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 16:00:02.113958   30316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 16:00:02.121438   30316 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 16:00:02.121481   30316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 16:00:02.129052   30316 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 16:00:02.129078   30316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 16:00:02.411869   30316 out.go:204]   - Generating certificates and keys ...
	I0728 16:00:03.414591   30316 out.go:204]   - Booting up control plane ...
	I0728 16:00:10.524898   30316 out.go:204]   - Configuring RBAC rules ...
	I0728 16:00:10.902674   30316 cni.go:95] Creating CNI manager for ""
	I0728 16:00:10.902686   30316 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 16:00:10.902703   30316 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 16:00:10.902792   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:10.902802   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551 minikube.k8s.io/name=default-k8s-different-port-20220728155420-12923 minikube.k8s.io/updated_at=2022_07_28T16_00_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:10.913574   30316 ops.go:34] apiserver oom_adj: -16
	I0728 16:00:11.094279   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:11.648292   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:12.148251   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:12.648564   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:13.148181   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:13.648060   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:14.149495   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:14.648613   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:15.148181   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:15.648223   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:16.147977   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:16.648059   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:17.148468   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:17.648414   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:18.148930   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:18.648014   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:19.148979   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:19.649861   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:20.148179   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:20.647929   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:21.148951   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:21.649324   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:22.149231   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:22.648610   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:23.149446   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:23.647962   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:23.790473   30316 kubeadm.go:1045] duration metric: took 12.887965506s to wait for elevateKubeSystemPrivileges.
	I0728 16:00:23.790492   30316 kubeadm.go:397] StartCluster complete in 4m48.122451346s
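
Note: the half-second loop of `kubectl get sa default` above (elevateKubeSystemPrivileges) waits for the controller-manager to create the default ServiceAccount, so that the minikube-rbac cluster-admin binding created earlier can take effect. A sketch of that wait, assuming kubectl and the in-VM kubeconfig path from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll twice a second, as the log does, until the default
	// ServiceAccount exists in the freshly initialised cluster.
	for {
		cmd := exec.Command("sudo", "kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
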
	I0728 16:00:23.790510   30316 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:00:23.790593   30316 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 16:00:23.791159   30316 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:00:24.307591   30316 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220728155420-12923" rescaled to 1
	I0728 16:00:24.307627   30316 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 16:00:24.307647   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 16:00:24.307667   30316 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0728 16:00:24.307873   30316 config.go:178] Loaded profile config "default-k8s-different-port-20220728155420-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 16:00:24.332263   30316 out.go:177] * Verifying Kubernetes components...
	I0728 16:00:24.332328   30316 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.332327   30316 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.354025   30316 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.354042   30316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 16:00:24.354056   30316 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.332334   30316 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220728155420-12923"
	W0728 16:00:24.354084   30316 addons.go:162] addon storage-provisioner should already be in state true
	I0728 16:00:24.332359   30316 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.354099   30316 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220728155420-12923"
	W0728 16:00:24.354103   30316 addons.go:162] addon metrics-server should already be in state true
	I0728 16:00:24.354085   30316 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220728155420-12923"
	W0728 16:00:24.354129   30316 addons.go:162] addon dashboard should already be in state true
	I0728 16:00:24.354132   30316 host.go:66] Checking if "default-k8s-different-port-20220728155420-12923" exists ...
	I0728 16:00:24.354131   30316 host.go:66] Checking if "default-k8s-different-port-20220728155420-12923" exists ...
	I0728 16:00:24.354155   30316 host.go:66] Checking if "default-k8s-different-port-20220728155420-12923" exists ...
	I0728 16:00:24.354401   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.354528   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.355541   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.355684   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.390460   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.390458   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0728 16:00:24.535958   30316 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0728 16:00:24.520667   30316 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.567843   30316 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220728155420-12923" to be "Ready" ...
	I0728 16:00:24.573082   30316 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0728 16:00:24.573099   30316 addons.go:162] addon default-storageclass should already be in state true
	I0728 16:00:24.635787   30316 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0728 16:00:24.635861   30316 host.go:66] Checking if "default-k8s-different-port-20220728155420-12923" exists ...
	I0728 16:00:24.639025   30316 node_ready.go:49] node "default-k8s-different-port-20220728155420-12923" has status "Ready":"True"
	I0728 16:00:24.657058   30316 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0728 16:00:24.678275   30316 node_ready.go:38] duration metric: took 42.466015ms waiting for node "default-k8s-different-port-20220728155420-12923" to be "Ready" ...
	I0728 16:00:24.737012   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0728 16:00:24.737029   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0728 16:00:24.737022   30316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 16:00:24.678484   30316 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 16:00:24.679284   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.737050   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 16:00:24.715939   30316 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0728 16:00:24.737094   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.737093   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0728 16:00:24.737131   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.737171   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.746547   30316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:24.836630   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 16:00:24.840168   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 16:00:24.841441   30316 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 16:00:24.841451   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 16:00:24.841523   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.841520   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 16:00:24.910866   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 16:00:24.997852   30316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 16:00:25.003279   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0728 16:00:25.003293   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0728 16:00:25.093643   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0728 16:00:25.093667   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0728 16:00:25.113432   30316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 16:00:25.118821   30316 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0728 16:00:25.118833   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0728 16:00:25.202782   30316 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0728 16:00:25.202797   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0728 16:00:25.207372   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0728 16:00:25.207387   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0728 16:00:25.283674   30316 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 16:00:25.283692   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0728 16:00:25.299996   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0728 16:00:25.300010   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0728 16:00:25.401995   30316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 16:00:25.411240   30316 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.020728929s)
	I0728 16:00:25.411263   30316 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
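	(For reference: the sed pipeline that just completed rewrites the coredns ConfigMap in place, inserting a hosts block ahead of the forward plugin. Reconstructed from the command itself, not captured from the cluster, the resulting Corefile fragment is:
	        hosts {
	           192.168.65.2 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	The hosts plugin answers queries for host.minikube.internal locally and falls through to forward for everything else.)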
	I0728 16:00:25.482536   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0728 16:00:25.482550   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0728 16:00:25.502802   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0728 16:00:25.502823   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0728 16:00:25.586073   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0728 16:00:25.586092   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0728 16:00:25.689008   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0728 16:00:25.689025   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0728 16:00:25.705569   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 16:00:25.705582   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0728 16:00:25.723569   30316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 16:00:25.991093   30316 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:26.705744   30316 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0728 16:00:26.762993   30316 addons.go:414] enableAddons completed in 2.455353871s
	I0728 16:00:26.767417   30316 pod_ready.go:102] pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace has status "Ready":"False"
	I0728 16:00:29.265572   30316 pod_ready.go:102] pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace has status "Ready":"False"
	I0728 16:00:31.779396   30316 pod_ready.go:97] error getting pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-nmb74" not found
	I0728 16:00:31.779412   30316 pod_ready.go:81] duration metric: took 7.03295771s waiting for pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace to be "Ready" ...
	E0728 16:00:31.779420   30316 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-nmb74" not found
	I0728 16:00:31.779425   30316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-vm6w7" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.784329   30316 pod_ready.go:92] pod "coredns-6d4b75cb6d-vm6w7" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.784338   30316 pod_ready.go:81] duration metric: took 4.908258ms waiting for pod "coredns-6d4b75cb6d-vm6w7" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.784346   30316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.788691   30316 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.788700   30316 pod_ready.go:81] duration metric: took 4.337288ms waiting for pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.788706   30316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.793360   30316 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.793369   30316 pod_ready.go:81] duration metric: took 4.65919ms waiting for pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.793376   30316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.798075   30316 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.798084   30316 pod_ready.go:81] duration metric: took 4.703602ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.798090   30316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pv62j" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.978720   30316 pod_ready.go:92] pod "kube-proxy-pv62j" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.978732   30316 pod_ready.go:81] duration metric: took 180.639115ms waiting for pod "kube-proxy-pv62j" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.978741   30316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:32.383133   30316 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:32.383144   30316 pod_ready.go:81] duration metric: took 404.402924ms waiting for pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:32.383151   30316 pod_ready.go:38] duration metric: took 7.646241882s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 16:00:32.383168   30316 api_server.go:51] waiting for apiserver process to appear ...
	I0728 16:00:32.383219   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:00:32.394067   30316 api_server.go:71] duration metric: took 8.086551332s to wait for apiserver process to appear ...
	I0728 16:00:32.394085   30316 api_server.go:87] waiting for apiserver healthz status ...
	I0728 16:00:32.394093   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 16:00:32.399509   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 200:
	ok
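	(The healthz probe above can be reproduced from the host against the same forwarded port. A minimal sketch, assuming kubeadm's default of anonymous access to /healthz via the system:public-info-viewer role:
	    $ curl -k https://127.0.0.1:59515/healthz
	    ok
	The -k flag skips TLS verification, which is fine for a liveness poke; anything beyond /healthz, /livez and /readyz needs the profile's client certificates.)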
	I0728 16:00:32.400740   30316 api_server.go:140] control plane version: v1.24.3
	I0728 16:00:32.400749   30316 api_server.go:130] duration metric: took 6.659375ms to wait for apiserver health ...
	I0728 16:00:32.400754   30316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 16:00:32.580287   30316 system_pods.go:59] 8 kube-system pods found
	I0728 16:00:32.580302   30316 system_pods.go:61] "coredns-6d4b75cb6d-vm6w7" [00979eea-4984-40c4-9975-4e9ef9c33a1f] Running
	I0728 16:00:32.580306   30316 system_pods.go:61] "etcd-default-k8s-different-port-20220728155420-12923" [8acc6fb5-6fbb-4eb7-ad74-bc24bde492ae] Running
	I0728 16:00:32.580310   30316 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220728155420-12923" [1d24c376-e9f9-4dde-bf95-9c7f4a5ff6de] Running
	I0728 16:00:32.580314   30316 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220728155420-12923" [8fa6e4db-daea-4dcc-a3d3-ad78a001c2e7] Running
	I0728 16:00:32.580317   30316 system_pods.go:61] "kube-proxy-pv62j" [25bad633-52dd-438c-ade8-4b59d566d336] Running
	I0728 16:00:32.580321   30316 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220728155420-12923" [9243a062-f92f-4f4c-8388-a47f60b2b439] Running
	I0728 16:00:32.580329   30316 system_pods.go:61] "metrics-server-5c6f97fb75-58sqm" [55c181c1-c5da-4dbb-8b61-2522d22261f4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 16:00:32.580343   30316 system_pods.go:61] "storage-provisioner" [f0b7ea65-aadc-49a7-8498-f541759e61a9] Running
	I0728 16:00:32.580348   30316 system_pods.go:74] duration metric: took 179.593374ms to wait for pod list to return data ...
	I0728 16:00:32.580353   30316 default_sa.go:34] waiting for default service account to be created ...
	I0728 16:00:32.760390   30316 default_sa.go:45] found service account: "default"
	I0728 16:00:32.760401   30316 default_sa.go:55] duration metric: took 180.047618ms for default service account to be created ...
	I0728 16:00:32.760406   30316 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 16:00:32.965098   30316 system_pods.go:86] 8 kube-system pods found
	I0728 16:00:32.965112   30316 system_pods.go:89] "coredns-6d4b75cb6d-vm6w7" [00979eea-4984-40c4-9975-4e9ef9c33a1f] Running
	I0728 16:00:32.965116   30316 system_pods.go:89] "etcd-default-k8s-different-port-20220728155420-12923" [8acc6fb5-6fbb-4eb7-ad74-bc24bde492ae] Running
	I0728 16:00:32.965120   30316 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220728155420-12923" [1d24c376-e9f9-4dde-bf95-9c7f4a5ff6de] Running
	I0728 16:00:32.965124   30316 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220728155420-12923" [8fa6e4db-daea-4dcc-a3d3-ad78a001c2e7] Running
	I0728 16:00:32.965127   30316 system_pods.go:89] "kube-proxy-pv62j" [25bad633-52dd-438c-ade8-4b59d566d336] Running
	I0728 16:00:32.965132   30316 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220728155420-12923" [9243a062-f92f-4f4c-8388-a47f60b2b439] Running
	I0728 16:00:32.965138   30316 system_pods.go:89] "metrics-server-5c6f97fb75-58sqm" [55c181c1-c5da-4dbb-8b61-2522d22261f4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 16:00:32.965143   30316 system_pods.go:89] "storage-provisioner" [f0b7ea65-aadc-49a7-8498-f541759e61a9] Running
	I0728 16:00:32.965147   30316 system_pods.go:126] duration metric: took 204.741682ms to wait for k8s-apps to be running ...
	I0728 16:00:32.965153   30316 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 16:00:32.965202   30316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 16:00:32.986222   30316 system_svc.go:56] duration metric: took 21.063891ms WaitForService to wait for kubelet.
	I0728 16:00:32.986237   30316 kubeadm.go:572] duration metric: took 8.678736077s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0728 16:00:32.986251   30316 node_conditions.go:102] verifying NodePressure condition ...
	I0728 16:00:33.178411   30316 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 16:00:33.178425   30316 node_conditions.go:123] node cpu capacity is 6
	I0728 16:00:33.178437   30316 node_conditions.go:105] duration metric: took 192.177388ms to run NodePressure ...
	I0728 16:00:33.178453   30316 start.go:216] waiting for startup goroutines ...
	I0728 16:00:33.214457   30316 start.go:506] kubectl: 1.24.2, cluster: 1.24.3 (minor skew: 0)
	I0728 16:00:33.238006   30316 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220728155420-12923" cluster and "default" namespace by default
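	(The "minor skew: 0" two lines up is simple arithmetic: kubectl 1.24.2 and the cluster's 1.24.3 share minor version 24, so the skew is |24 - 24| = 0; minikube only prints a compatibility warning for larger minor-version gaps.)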
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 22:55:32 UTC, end at Thu 2022-07-28 23:01:22 UTC. --
	Jul 28 23:00:00 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:00.906283694Z" level=info msg="ignoring event" container=7246ede4638f515edb9484d4944a430b44f622e9665eee97705f7e59f6a9a92c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:00 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:00.986204113Z" level=info msg="ignoring event" container=d93f625926a3364de73852c95bd0c99f02b781168ca34a014acfe4816ec27919 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.063714743Z" level=info msg="ignoring event" container=20a7f0c6bb4018c605561b36be8e62ea18c79be47171542d7ea40fa0d17104e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.139402787Z" level=info msg="ignoring event" container=a1aa3716a38f0987f4fdca57a75fd939086b82668ce356395f2cadd81da469ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.207377822Z" level=info msg="ignoring event" container=e4f6394e3c8557326ba43e6c8ca6f19bfff1e124d52845e06869638d3e633d97 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.275029814Z" level=info msg="ignoring event" container=175124091cbe84068232dd9d624db0f5cdf618007ea8656aff90201e990c2268 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.342483610Z" level=info msg="ignoring event" container=2f023feef52c478633d427518539315695a740777ffae167493d3b1372350cf3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.433721908Z" level=info msg="ignoring event" container=fa03954d2288eab9eb6f304bab496c4cc114656ec8a90c65f683f2ea5f8d18d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.501846169Z" level=info msg="ignoring event" container=6bac6aba3bf9a76a101dda8b021ad6811f33b8f9cac13b1f81324fb94b644b6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.575626999Z" level=info msg="ignoring event" container=09abd750246d2a4978b1127cf196aca26a803840822bec351d3c98b4654d2871 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.648041486Z" level=info msg="ignoring event" container=a8fa1df66b25ac2a8104871f83080c4d0f37631394f24a9cb5e02b2077bdb55d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.787047193Z" level=info msg="ignoring event" container=ed4e00e95ddcd163fee6a5aa45e06076cfeadbbd67e89539733742977af388df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:26 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:26.854448762Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:00:26 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:26.854495261Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:00:26 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:26.855700145Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:00:28 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:28.259519307Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 28 23:00:30 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:30.436403882Z" level=info msg="ignoring event" container=6a5caa98c9865bb244e15a0cfd68d280c61cb122072bd49bc1ee7bbaa8293f73 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:30 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:30.517228141Z" level=info msg="ignoring event" container=bef75a25eb7e6cc1518887a9e2014aef4c41e855f0a0f3fb4acbc51837379f90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:34 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:34.831105792Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 28 23:00:35 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:35.114544704Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 28 23:00:38 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:38.120610373Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:00:38 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:38.120659439Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:00:38 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:38.187523181Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:00:38 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:38.334370351Z" level=info msg="ignoring event" container=aad3ab43569540d94592e1258714a770d72430ad5a8526edc39c8aec81f39d53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:38 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:38.558528169Z" level=info msg="ignoring event" container=02b51e5ec9ed0854e334add4f068e6e1a0777b05b3d76db4fc171310ad19ae80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
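	(The fake.domain pull failures above line up with the start log's "Using image fake.domain/k8s.gcr.io/echoserver:1.4": the hostname is unresolvable, presumably deliberately so for this test, so every pull of the metrics-server test image fails at the registry lookup and the pod stays Pending. The same class of error is reproducible directly, e.g.:
	    $ docker pull fake.domain/k8s.gcr.io/echoserver:1.4
	    Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain ... no such host
	)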
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	02b51e5ec9ed0       a90209bb39e3d                                                                                    45 seconds ago       Exited              dashboard-metrics-scraper   1                   015bf6710e787
	9e8e6f7bfaeb7       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   49 seconds ago       Running             kubernetes-dashboard        0                   d3e1af9348dad
	7da6a2c799210       6e38f40d628db                                                                                    57 seconds ago       Running             storage-provisioner         0                   f7839ec11aa40
	e9059bdd51b3e       a4ca41631cc7a                                                                                    58 seconds ago       Running             coredns                     0                   3110a60df04ed
	fb4ee731eaddc       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   650b13da5b1ca
	f6ba3dae21920       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   2d0741186520e
	612835ba363da       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   da27fa6a56a5f
	8986c1e4a8052       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   58dc75b4ddf15
	c1b00dc8df258       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   7fb35b1eaeced
	
	* 
	* ==> coredns [e9059bdd51b3] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220728155420-12923
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220728155420-12923
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551
	                    minikube.k8s.io/name=default-k8s-different-port-20220728155420-12923
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_28T16_00_10_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Jul 2022 23:00:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220728155420-12923
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Jul 2022 23:01:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Jul 2022 23:01:20 +0000   Thu, 28 Jul 2022 23:00:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Jul 2022 23:01:20 +0000   Thu, 28 Jul 2022 23:00:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Jul 2022 23:01:20 +0000   Thu, 28 Jul 2022 23:00:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Jul 2022 23:01:20 +0000   Thu, 28 Jul 2022 23:01:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20220728155420-12923
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                d8d926e1-386e-4950-b4f3-f8e0acfc6b16
	  Boot ID:                    defbf4cf-33ea-4fb3-a1e4-ed0c170fdaf9
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-vm6w7                                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     60s
	  kube-system                 etcd-default-k8s-different-port-20220728155420-12923                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         72s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220728155420-12923              250m (4%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220728155420-12923    200m (3%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-pv62j                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220728155420-12923              100m (1%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 metrics-server-5c6f97fb75-58sqm                                             100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         58s
	  kube-system                 storage-provisioner                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-bgrbr                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-7sfms                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
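	(Sanity check on the percentages: they are computed against the node's allocatable capacity listed earlier, so 850m of 6 CPUs (6000m) is ~14% and 370Mi of 6086504Ki (~5944Mi) is ~6%, matching the request figures above.)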
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 59s                kube-proxy       
	  Normal  Starting                 79s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s (x4 over 79s)  kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x4 over 79s)  kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x3 over 79s)  kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  73s                kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s                kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s                kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasSufficientPID
	  Normal  NodeReady                62s                kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeReady
	  Normal  RegisteredNode           60s                node-controller  Node default-k8s-different-port-20220728155420-12923 event: Registered Node default-k8s-different-port-20220728155420-12923 in Controller
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s                 kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3s                 kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeNotReady
	  Normal  NodeReady                3s                 kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [612835ba363d] <==
	* {"level":"info","ts":"2022-07-28T23:00:05.036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-07-28T23:00:05.036Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-07-28T23:00:05.038Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T23:00:05.038Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-28T23:00:05.038Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T23:00:05.038Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-28T23:00:05.038Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-different-port-20220728155420-12923 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T23:00:05.933Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:00:05.933Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:00:05.933Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:00:05.933Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-28T23:00:05.936Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-07-28T23:00:05.938Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T23:00:05.938Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  23:01:23 up  1:22,  0 users,  load average: 0.55, 0.96, 1.04
	Linux default-k8s-different-port-20220728155420-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [f6ba3dae2192] <==
	* I0728 23:00:08.728394       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0728 23:00:08.979216       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0728 23:00:09.000594       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0728 23:00:09.125339       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0728 23:00:09.129173       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0728 23:00:09.129829       1 controller.go:611] quota admission added evaluator for: endpoints
	I0728 23:00:09.132431       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0728 23:00:09.870432       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0728 23:00:10.745451       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0728 23:00:10.751123       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0728 23:00:10.758975       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0728 23:00:10.844768       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0728 23:00:23.292163       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0728 23:00:23.504127       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0728 23:00:23.953823       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0728 23:00:25.936348       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.103.198.26]
	I0728 23:00:26.631910       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.103.124.158]
	I0728 23:00:26.642072       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.109.153.185]
	W0728 23:00:26.831315       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 23:00:26.831362       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0728 23:00:26.831406       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 23:00:26.831421       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 23:00:26.831430       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0728 23:00:26.832496       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [8986c1e4a805] <==
	* I0728 23:00:23.949342       1 shared_informer.go:262] Caches are synced for garbage collector
	I0728 23:00:23.949469       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0728 23:00:23.955724       1 shared_informer.go:262] Caches are synced for garbage collector
	I0728 23:00:25.839040       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0728 23:00:25.843233       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0728 23:00:25.848594       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0728 23:00:25.893367       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-58sqm"
	I0728 23:00:26.503867       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0728 23:00:26.508393       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 23:00:26.513589       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 23:00:26.518621       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 23:00:26.518670       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 23:00:26.518811       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0728 23:00:26.524376       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 23:00:26.524416       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 23:00:26.526690       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 23:00:26.533093       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 23:00:26.533145       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 23:00:26.533295       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 23:00:26.538138       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 23:00:26.538478       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 23:00:26.545339       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-7sfms"
	I0728 23:00:26.595094       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-bgrbr"
	E0728 23:01:20.471988       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0728 23:01:20.485480       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [fb4ee731eadd] <==
	* I0728 23:00:23.931427       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0728 23:00:23.931570       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0728 23:00:23.931618       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 23:00:23.949037       1 server_others.go:206] "Using iptables Proxier"
	I0728 23:00:23.949127       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 23:00:23.949148       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 23:00:23.949165       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0728 23:00:23.949199       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 23:00:23.949337       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 23:00:23.949550       1 server.go:661] "Version info" version="v1.24.3"
	I0728 23:00:23.949610       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 23:00:23.949990       1 config.go:317] "Starting service config controller"
	I0728 23:00:23.950030       1 config.go:226] "Starting endpoint slice config controller"
	I0728 23:00:23.950038       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 23:00:23.950038       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 23:00:23.950942       1 config.go:444] "Starting node config controller"
	I0728 23:00:23.951074       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 23:00:24.050916       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0728 23:00:24.051657       1 shared_informer.go:262] Caches are synced for node config
	I0728 23:00:24.051744       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [c1b00dc8df25] <==
	* W0728 23:00:07.820022       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0728 23:00:07.820079       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0728 23:00:07.820147       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0728 23:00:07.820159       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0728 23:00:07.820275       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0728 23:00:07.820305       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0728 23:00:07.820372       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0728 23:00:07.820414       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0728 23:00:07.820291       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0728 23:00:07.820448       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0728 23:00:07.820492       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0728 23:00:07.820501       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0728 23:00:07.820756       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0728 23:00:07.820767       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0728 23:00:07.821156       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0728 23:00:07.821167       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0728 23:00:07.823953       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0728 23:00:07.823967       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0728 23:00:07.824222       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0728 23:00:07.824257       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0728 23:00:08.716475       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0728 23:00:08.716525       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0728 23:00:08.962353       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0728 23:00:08.962449       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0728 23:00:11.617726       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 22:55:32 UTC, end at Thu 2022-07-28 23:01:24 UTC. --
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.908734    9763 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.908805    9763 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.909199    9763 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.909251    9763 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.909319    9763 topology_manager.go:200] "Topology Admit Handler"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.956992    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f0b7ea65-aadc-49a7-8498-f541759e61a9-tmp\") pod \"storage-provisioner\" (UID: \"f0b7ea65-aadc-49a7-8498-f541759e61a9\") " pod="kube-system/storage-provisioner"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957045    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6059b3a5-1140-4e97-b06a-44811e5c5844-tmp-volume\") pod \"dashboard-metrics-scraper-dffd48c4c-bgrbr\" (UID: \"6059b3a5-1140-4e97-b06a-44811e5c5844\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-bgrbr"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957137    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25bad633-52dd-438c-ade8-4b59d566d336-lib-modules\") pod \"kube-proxy-pv62j\" (UID: \"25bad633-52dd-438c-ade8-4b59d566d336\") " pod="kube-system/kube-proxy-pv62j"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957154    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/55c181c1-c5da-4dbb-8b61-2522d22261f4-tmp-dir\") pod \"metrics-server-5c6f97fb75-58sqm\" (UID: \"55c181c1-c5da-4dbb-8b61-2522d22261f4\") " pod="kube-system/metrics-server-5c6f97fb75-58sqm"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957171    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2lzv\" (UniqueName: \"kubernetes.io/projected/55c181c1-c5da-4dbb-8b61-2522d22261f4-kube-api-access-f2lzv\") pod \"metrics-server-5c6f97fb75-58sqm\" (UID: \"55c181c1-c5da-4dbb-8b61-2522d22261f4\") " pod="kube-system/metrics-server-5c6f97fb75-58sqm"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957187    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6wb6\" (UniqueName: \"kubernetes.io/projected/00979eea-4984-40c4-9975-4e9ef9c33a1f-kube-api-access-z6wb6\") pod \"coredns-6d4b75cb6d-vm6w7\" (UID: \"00979eea-4984-40c4-9975-4e9ef9c33a1f\") " pod="kube-system/coredns-6d4b75cb6d-vm6w7"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957200    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25bad633-52dd-438c-ade8-4b59d566d336-xtables-lock\") pod \"kube-proxy-pv62j\" (UID: \"25bad633-52dd-438c-ade8-4b59d566d336\") " pod="kube-system/kube-proxy-pv62j"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957215    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/98861945-a85a-4fc2-8f87-03ab3cd624cf-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-7sfms\" (UID: \"98861945-a85a-4fc2-8f87-03ab3cd624cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-7sfms"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957229    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbp5n\" (UniqueName: \"kubernetes.io/projected/f0b7ea65-aadc-49a7-8498-f541759e61a9-kube-api-access-pbp5n\") pod \"storage-provisioner\" (UID: \"f0b7ea65-aadc-49a7-8498-f541759e61a9\") " pod="kube-system/storage-provisioner"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957245    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bkzm\" (UniqueName: \"kubernetes.io/projected/98861945-a85a-4fc2-8f87-03ab3cd624cf-kube-api-access-6bkzm\") pod \"kubernetes-dashboard-5fd5574d9f-7sfms\" (UID: \"98861945-a85a-4fc2-8f87-03ab3cd624cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-7sfms"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957263    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99j6q\" (UniqueName: \"kubernetes.io/projected/6059b3a5-1140-4e97-b06a-44811e5c5844-kube-api-access-99j6q\") pod \"dashboard-metrics-scraper-dffd48c4c-bgrbr\" (UID: \"6059b3a5-1140-4e97-b06a-44811e5c5844\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-bgrbr"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957279    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00979eea-4984-40c4-9975-4e9ef9c33a1f-config-volume\") pod \"coredns-6d4b75cb6d-vm6w7\" (UID: \"00979eea-4984-40c4-9975-4e9ef9c33a1f\") " pod="kube-system/coredns-6d4b75cb6d-vm6w7"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957294    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hbbj\" (UniqueName: \"kubernetes.io/projected/25bad633-52dd-438c-ade8-4b59d566d336-kube-api-access-8hbbj\") pod \"kube-proxy-pv62j\" (UID: \"25bad633-52dd-438c-ade8-4b59d566d336\") " pod="kube-system/kube-proxy-pv62j"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957309    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25bad633-52dd-438c-ade8-4b59d566d336-kube-proxy\") pod \"kube-proxy-pv62j\" (UID: \"25bad633-52dd-438c-ade8-4b59d566d336\") " pod="kube-system/kube-proxy-pv62j"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957317    9763 reconciler.go:157] "Reconciler: start to sync state"
	Jul 28 23:01:23 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:23.106932    9763 request.go:601] Waited for 1.049973974s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Jul 28 23:01:23 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:23.140676    9763 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220728155420-12923\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220728155420-12923"
	Jul 28 23:01:23 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:23.322911    9763 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220728155420-12923\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220728155420-12923"
	Jul 28 23:01:23 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:23.566662    9763 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220728155420-12923\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220728155420-12923"
	Jul 28 23:01:23 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:23.766053    9763 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220728155420-12923\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220728155420-12923"
	
	* 
	* ==> kubernetes-dashboard [9e8e6f7bfaeb] <==
	* 2022/07/28 23:00:34 Starting overwatch
	2022/07/28 23:00:34 Using namespace: kubernetes-dashboard
	2022/07/28 23:00:34 Using in-cluster config to connect to apiserver
	2022/07/28 23:00:34 Using secret token for csrf signing
	2022/07/28 23:00:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/28 23:00:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/28 23:00:34 Successful initial request to the apiserver, version: v1.24.3
	2022/07/28 23:00:34 Generating JWE encryption key
	2022/07/28 23:00:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/28 23:00:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/28 23:00:34 Initializing JWE encryption key from synchronized object
	2022/07/28 23:00:34 Creating in-cluster Sidecar client
	2022/07/28 23:00:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/28 23:00:34 Serving insecurely on HTTP port: 9090
	2022/07/28 23:01:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [7da6a2c79921] <==
	* I0728 23:00:26.252475       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 23:00:26.297612       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 23:00:26.297713       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0728 23:00:26.305962       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0728 23:00:26.306163       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220728155420-12923_472ec422-e9ee-424f-a0a5-dbfdfe69d361!
	I0728 23:00:26.306472       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e074a991-8d27-48cc-bdc9-ac906f837298", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220728155420-12923_472ec422-e9ee-424f-a0a5-dbfdfe69d361 became leader
	I0728 23:00:26.407219       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220728155420-12923_472ec422-e9ee-424f-a0a5-dbfdfe69d361!
	

                                                
                                                
-- /stdout --
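Note on the dump above: the kube-scheduler "forbidden" warnings at 23:00:07-08 are the usual startup race, where the scheduler begins listing resources before the apiserver has finished reconciling its RBAC bindings, and they stop once the informer caches sync at 23:00:11. The kubelet's later "Failed creating a mirror pod for ... already exists" errors are similarly benign on a restart, since the apiserver still holds the mirror pods from the previous kubelet run. A minimal Go sketch of how such a status error is recognized, using the k8s.io/apimachinery helpers (illustrative only, not minikube's or the kubelet's actual code; assumes k8s.io/apimachinery is on the module path):

package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Build the same kind of StatusError the kubelet log above reports
	// when a mirror pod survived the restart.
	err := apierrors.NewAlreadyExists(
		schema.GroupResource{Resource: "pods"},
		"kube-scheduler-default-k8s-different-port-20220728155420-12923",
	)
	if apierrors.IsAlreadyExists(err) {
		fmt.Println("mirror pod already present; harmless on kubelet restart")
	}
}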
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220728155420-12923 -n default-k8s-different-port-20220728155420-12923
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220728155420-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-58sqm
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220728155420-12923 describe pod metrics-server-5c6f97fb75-58sqm
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220728155420-12923 describe pod metrics-server-5c6f97fb75-58sqm: exit status 1 (268.76023ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-58sqm" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220728155420-12923 describe pod metrics-server-5c6f97fb75-58sqm: exit status 1
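The NotFound above is a race rather than a second failure: metrics-server-5c6f97fb75-58sqm was still non-Running when the field-selector query listed it, but had been deleted or replaced by the time describe ran. A minimal client-go sketch of the same non-running-pods query (a hypothetical standalone program, assuming a reachable kubeconfig at the default location); any pod it returns can vanish before a follow-up call, exactly as happened here:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same selector the helper passes to kubectl: every pod whose phase is not Running.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s (%s)\n", p.Namespace, p.Name, p.Status.Phase)
	}
}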
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220728155420-12923
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220728155420-12923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d8d7f92f3558c28362c6b5ddb38dcf96c632b6886b3e774ea6e79d70f629962d",
	        "Created": "2022-07-28T22:54:26.652227115Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291298,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:55:31.996485314Z",
	            "FinishedAt": "2022-07-28T22:55:30.055048022Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/d8d7f92f3558c28362c6b5ddb38dcf96c632b6886b3e774ea6e79d70f629962d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d8d7f92f3558c28362c6b5ddb38dcf96c632b6886b3e774ea6e79d70f629962d/hostname",
	        "HostsPath": "/var/lib/docker/containers/d8d7f92f3558c28362c6b5ddb38dcf96c632b6886b3e774ea6e79d70f629962d/hosts",
	        "LogPath": "/var/lib/docker/containers/d8d7f92f3558c28362c6b5ddb38dcf96c632b6886b3e774ea6e79d70f629962d/d8d7f92f3558c28362c6b5ddb38dcf96c632b6886b3e774ea6e79d70f629962d-json.log",
	        "Name": "/default-k8s-different-port-20220728155420-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220728155420-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220728155420-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/19cf15522acce53c75e401f40b025a238bf85a62355c17678c2b59e6f96d73c1-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/19cf15522acce53c75e401f40b025a238bf85a62355c17678c2b59e6f96d73c1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/19cf15522acce53c75e401f40b025a238bf85a62355c17678c2b59e6f96d73c1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/19cf15522acce53c75e401f40b025a238bf85a62355c17678c2b59e6f96d73c1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220728155420-12923",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220728155420-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220728155420-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220728155420-12923",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220728155420-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "341f7fa70f2c12d2fc7ff1a8c5a50fbf954d395bb7cfece6bf2947e9163ff9f2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59511"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59512"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59513"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59514"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "59515"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/341f7fa70f2c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220728155420-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d8d7f92f3558",
	                        "default-k8s-different-port-20220728155420-12923"
	                    ],
	                    "NetworkID": "a61796860495adbfacdefc21f813297bd80016d2dc54d641ab8799dccf8c786d",
	                    "EndpointID": "63d545a57fb144ddd794307d9258933a2fef916ed7e08004f84f000eafb63a4b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
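One detail worth pulling out of the inspect dump: every published container port (22, 2376, 5000, 8444, 32443) is bound to an ephemeral host port, so 8444/tcp, the apiserver port this profile uses, lands on 0.0.0.0:59515. A minimal Go sketch (assumes a local docker CLI; not the test helpers' actual code) that recovers the mapping by decoding only the NetworkSettings.Ports shape shown above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container mirrors just the NetworkSettings.Ports shape from the dump above.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect",
		"default-k8s-different-port-20220728155420-12923").Output()
	if err != nil {
		panic(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
		panic("unexpected docker inspect output")
	}
	for _, b := range cs[0].NetworkSettings.Ports["8444/tcp"] {
		// With the dump above this prints: apiserver forwarded to 0.0.0.0:59515
		fmt.Printf("apiserver forwarded to %s:%s\n", b.HostIp, b.HostPort)
	}
}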
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220728155420-12923 -n default-k8s-different-port-20220728155420-12923
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-different-port-20220728155420-12923 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p default-k8s-different-port-20220728155420-12923 logs -n 25: (2.810732777s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                | old-k8s-version-20220728153807-12923            | jenkins | v1.26.0 | 28 Jul 22 15:43 PDT |                     |
	|         | old-k8s-version-20220728153807-12923              |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --kvm-network=default                 |                                                 |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                 |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                                                 |         |         |                     |                     |
	|         | --keep-context=false --driver=docker              |                                                 |         |         |                     |                     |
	|         |  --kubernetes-version=v1.16.0                     |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:46 PDT | 28 Jul 22 15:46 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	| delete  | -p                                                | no-preload-20220728153949-12923                 | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | no-preload-20220728153949-12923                   |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:47 PDT | 28 Jul 22 15:47 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:48 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                   |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                         |                                                 |         |         |                     |                     |
	|         | --driver=docker                                   |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                  |                                                 |         |         |                     |                     |
	| delete  | -p                                                | disable-driver-mounts-20220728155419-12923      | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | disable-driver-mounts-20220728155419-12923        |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                     |                     |
	| stop    | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                     |                     |
	| start   | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker             |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                      |                                                 |         |         |                     |                     |
	| ssh     | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:00 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                        |                                                 |         |         |                     |                     |
	| pause   | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:00 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	| unpause | -p                                                | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:01 PDT |
	|         | default-k8s-different-port-20220728155420-12923   |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                     |                     |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 15:55:30
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 15:55:30.758749   30316 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:55:30.758949   30316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:55:30.758955   30316 out.go:309] Setting ErrFile to fd 2...
	I0728 15:55:30.758959   30316 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:55:30.759060   30316 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:55:30.759520   30316 out.go:303] Setting JSON to false
	I0728 15:55:30.774489   30316 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":9972,"bootTime":1659038958,"procs":350,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 15:55:30.774588   30316 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 15:55:30.795830   30316 out.go:177] * [default-k8s-different-port-20220728155420-12923] minikube v1.26.0 on Darwin 12.5
	I0728 15:55:30.839027   30316 notify.go:193] Checking for updates...
	I0728 15:55:30.860766   30316 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 15:55:30.882054   30316 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:55:30.903796   30316 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 15:55:30.924802   30316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 15:55:30.946060   30316 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 15:55:30.968665   30316 config.go:178] Loaded profile config "default-k8s-different-port-20220728155420-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:55:30.969316   30316 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 15:55:31.036834   30316 docker.go:137] docker version: linux-20.10.17
	I0728 15:55:31.037003   30316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:55:31.170708   30316 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:55:31.104106473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:55:31.213459   30316 out.go:177] * Using the docker driver based on existing profile
	I0728 15:55:31.235399   30316 start.go:284] selected driver: docker
	I0728 15:55:31.235424   30316 start.go:808] validating driver "docker" against &{Name:default-k8s-different-port-20220728155420-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728155420-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:55:31.235546   30316 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 15:55:31.238871   30316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 15:55:31.370748   30316 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 22:55:31.306721001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 15:55:31.370921   30316 start_flags.go:853] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0728 15:55:31.370940   30316 cni.go:95] Creating CNI manager for ""
	I0728 15:55:31.370951   30316 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:55:31.370961   30316 start_flags.go:310] config:
	{Name:default-k8s-different-port-20220728155420-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728155420-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:55:31.392647   30316 out.go:177] * Starting control plane node default-k8s-different-port-20220728155420-12923 in cluster default-k8s-different-port-20220728155420-12923
	I0728 15:55:31.414823   30316 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 15:55:31.436734   30316 out.go:177] * Pulling base image ...
	I0728 15:55:31.478779   30316 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:55:31.478823   30316 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 15:55:31.478857   30316 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0728 15:55:31.478885   30316 cache.go:57] Caching tarball of preloaded images
	I0728 15:55:31.479127   30316 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 15:55:31.479764   30316 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0728 15:55:31.480270   30316 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/config.json ...
	I0728 15:55:31.541959   30316 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 15:55:31.541977   30316 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 15:55:31.541987   30316 cache.go:208] Successfully downloaded all kic artifacts
	I0728 15:55:31.542029   30316 start.go:370] acquiring machines lock for default-k8s-different-port-20220728155420-12923: {Name:mk0e822f9f2b9adffe1c022a5e24460488a5334a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 15:55:31.542121   30316 start.go:374] acquired machines lock for "default-k8s-different-port-20220728155420-12923" in 68.722µs
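
The lock spec logged above ({Delay:500ms Timeout:10m0s}) describes a retry-until-deadline acquisition of the per-machine lock. The following is a minimal Go sketch of that pattern only; it is not minikube's lock implementation, and the lock-file mechanism and path are hypothetical stand-ins for illustration.

	// lockdemo sketches retry-until-deadline lock acquisition using a plain
	// O_CREATE|O_EXCL lock file (an assumed mechanism, not minikube's).
	package main
	
	import (
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	// acquire retries creating path exclusively every delay until timeout.
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if !errors.Is(err, os.ErrExist) {
				return nil, err // unexpected filesystem error, not contention
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay) // the 500ms Delay from the logged spec
		}
	}
	
	func main() {
		start := time.Now()
		release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Printf("acquired machines lock in %s\n", time.Since(start))
	}
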
	I0728 15:55:31.542144   30316 start.go:95] Skipping create...Using existing machine configuration
	I0728 15:55:31.542153   30316 fix.go:55] fixHost starting: 
	I0728 15:55:31.542383   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 15:55:31.605720   30316 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220728155420-12923: state=Stopped err=<nil>
	W0728 15:55:31.605754   30316 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 15:55:31.627632   30316 out.go:177] * Restarting existing docker container for "default-k8s-different-port-20220728155420-12923" ...
	I0728 15:55:31.649486   30316 cli_runner.go:164] Run: docker start default-k8s-different-port-20220728155420-12923
	I0728 15:55:31.993066   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 15:55:32.060550   30316 kic.go:415] container "default-k8s-different-port-20220728155420-12923" state is running.
	I0728 15:55:32.061181   30316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.129615   30316 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/config.json ...
	I0728 15:55:32.130011   30316 machine.go:88] provisioning docker machine ...
	I0728 15:55:32.130035   30316 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220728155420-12923"
	I0728 15:55:32.130125   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.198326   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:32.198552   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:32.198567   30316 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220728155420-12923 && echo "default-k8s-different-port-20220728155420-12923" | sudo tee /etc/hostname
	I0728 15:55:32.326053   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220728155420-12923
	
	I0728 15:55:32.326140   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.391769   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:32.391946   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:32.391963   30316 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220728155420-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220728155420-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220728155420-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 15:55:32.512852   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:55:32.512877   30316 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 15:55:32.512901   30316 ubuntu.go:177] setting up certificates
	I0728 15:55:32.512910   30316 provision.go:83] configureAuth start
	I0728 15:55:32.512984   30316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.578514   30316 provision.go:138] copyHostCerts
	I0728 15:55:32.578594   30316 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 15:55:32.578603   30316 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 15:55:32.578690   30316 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 15:55:32.578899   30316 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 15:55:32.578917   30316 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 15:55:32.578982   30316 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 15:55:32.579122   30316 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 15:55:32.579139   30316 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 15:55:32.579198   30316 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 15:55:32.579317   30316 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220728155420-12923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220728155420-12923]
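
The provision.go:112 line above generates a server certificate signed by the minikube CA, carrying the SANs listed in san=[...]. As a rough standard-library illustration of that step (not minikube's provision code; the self-signed CA stand-in and lifetimes here are assumptions), a Go sketch:

	// certsketch: issue a server cert with the SANs seen in the log line
	// above, signed by a throwaway CA generated in-process.
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Throwaway CA; the real flow signs with .minikube/certs/ca.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Server cert carrying the DNS and IP SANs from the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-different-port-20220728155420-12923"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "default-k8s-different-port-20220728155420-12923"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
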
	I0728 15:55:32.674959   30316 provision.go:172] copyRemoteCerts
	I0728 15:55:32.675028   30316 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 15:55:32.675080   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.741088   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:32.827517   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 15:55:32.845806   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0728 15:55:32.863725   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 15:55:32.880989   30316 provision.go:86] duration metric: configureAuth took 368.070725ms
	I0728 15:55:32.881002   30316 ubuntu.go:193] setting minikube options for container-runtime
	I0728 15:55:32.881150   30316 config.go:178] Loaded profile config "default-k8s-different-port-20220728155420-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:55:32.881205   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:32.946636   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:32.946805   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:32.946817   30316 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 15:55:33.067666   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 15:55:33.067682   30316 ubuntu.go:71] root file system type: overlay
	I0728 15:55:33.067838   30316 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 15:55:33.067911   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.131727   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:33.131899   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:33.131963   30316 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 15:55:33.261631   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 15:55:33.261710   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.325641   30316 main.go:134] libmachine: Using SSH client type: native
	I0728 15:55:33.325808   30316 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 59511 <nil> <nil>}
	I0728 15:55:33.325822   30316 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 15:55:33.449804   30316 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 15:55:33.449827   30316 machine.go:91] provisioned docker machine in 1.319828847s
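
The unit update above is idempotent: the provisioner writes docker.service.new, then the remote diff command only swaps the file in and restarts docker when the contents differ. A minimal local Go sketch of that compare-and-swap pattern (paths mirror the log; this runs the same systemctl commands as the remote shell, but is an illustration, not minikube's code):

	// unitswap: replace the live unit and restart docker only when the
	// staged file actually differs from what is installed.
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)
	
	func updateUnit(live, staged string) error {
		old, _ := os.ReadFile(live) // a missing live unit is treated as empty
		next, err := os.ReadFile(staged)
		if err != nil {
			return err
		}
		if bytes.Equal(old, next) {
			return os.Remove(staged) // no change; mirror the diff short-circuit
		}
		if err := os.Rename(staged, live); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}
	
	func main() {
		if err := updateUnit("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
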
	I0728 15:55:33.449836   30316 start.go:307] post-start starting for "default-k8s-different-port-20220728155420-12923" (driver="docker")
	I0728 15:55:33.449845   30316 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 15:55:33.449924   30316 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 15:55:33.449974   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.513907   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:33.601033   30316 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 15:55:33.604777   30316 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 15:55:33.604802   30316 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 15:55:33.604815   30316 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 15:55:33.604824   30316 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 15:55:33.604833   30316 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 15:55:33.604935   30316 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 15:55:33.605075   30316 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 15:55:33.605215   30316 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 15:55:33.611867   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:55:33.627853   30316 start.go:310] post-start completed in 178.00861ms
	I0728 15:55:33.627932   30316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:55:33.627977   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.695005   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:33.783324   30316 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 15:55:33.787801   30316 fix.go:57] fixHost completed within 2.245684947s
	I0728 15:55:33.787824   30316 start.go:82] releasing machines lock for "default-k8s-different-port-20220728155420-12923", held for 2.245723265s
	I0728 15:55:33.787920   30316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.853332   30316 ssh_runner.go:195] Run: systemctl --version
	I0728 15:55:33.853337   30316 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 15:55:33.853402   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.853416   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:33.920417   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:33.920583   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 15:55:34.004995   30316 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 15:55:34.200488   30316 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 15:55:34.200552   30316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 15:55:34.212709   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 15:55:34.225673   30316 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 15:55:34.294929   30316 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 15:55:34.354645   30316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:55:34.418257   30316 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 15:55:34.646963   30316 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 15:55:34.712035   30316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 15:55:34.777443   30316 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 15:55:34.786754   30316 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 15:55:34.786821   30316 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 15:55:34.790552   30316 start.go:471] Will wait 60s for crictl version
	I0728 15:55:34.790591   30316 ssh_runner.go:195] Run: sudo crictl version
	I0728 15:55:34.891481   30316 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0728 15:55:34.891547   30316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:55:34.925151   30316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 15:55:35.004758   30316 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 15:55:35.004838   30316 cli_runner.go:164] Run: docker exec -t default-k8s-different-port-20220728155420-12923 dig +short host.docker.internal
	I0728 15:55:35.122418   30316 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 15:55:35.122524   30316 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 15:55:35.126670   30316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
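
The bash one-liner above pins host.minikube.internal in /etc/hosts idempotently: strip any stale entry, append the fresh one, copy the result back. The same logic in a short Go sketch (an illustration only; the path is parameterized so it can be tried on a scratch file rather than /etc/hosts):

	// hostsmap: drop any line ending in "\t<name>", then append "<ip>\t<name>".
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func pinHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale entry, like grep -v $'\t<name>$'
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // the log uses cp; a rename is atomic here
	}
	
	func main() {
		if err := pinHost("/tmp/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
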
	I0728 15:55:35.135876   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:35.199963   30316 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 15:55:35.200048   30316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:55:35.230417   30316 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0728 15:55:35.230434   30316 docker.go:542] Images already preloaded, skipping extraction
	I0728 15:55:35.230509   30316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 15:55:35.259780   30316 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0728 15:55:35.259801   30316 cache_images.go:84] Images are preloaded, skipping loading
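
The decision at docker.go:611/cache_images.go:84 above is a set comparison: list what the daemon reports via docker images --format {{.Repository}}:{{.Tag}}, and skip tarball extraction when nothing expected is missing. A hedged Go sketch of that check (the expected list is copied from the log; in practice the loaded list would come from the docker CLI):

	// preloadcheck: report which expected images the daemon is missing.
	package main
	
	import "fmt"
	
	func missingImages(expected, loaded []string) []string {
		have := make(map[string]bool, len(loaded))
		for _, img := range loaded {
			have[img] = true
		}
		var missing []string
		for _, img := range expected {
			if !have[img] {
				missing = append(missing, img)
			}
		}
		return missing
	}
	
	func main() {
		expected := []string{
			"k8s.gcr.io/kube-apiserver:v1.24.3",
			"k8s.gcr.io/etcd:3.5.3-0",
			"k8s.gcr.io/pause:3.7",
		}
		loaded := expected // pretend the daemon already has everything
		if m := missingImages(expected, loaded); len(m) == 0 {
			fmt.Println("Images are preloaded, skipping loading")
		} else {
			fmt.Println("need extraction for:", m)
		}
	}
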
	I0728 15:55:35.259876   30316 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 15:55:35.335289   30316 cni.go:95] Creating CNI manager for ""
	I0728 15:55:35.335301   30316 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:55:35.335314   30316 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0728 15:55:35.335329   30316 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8444 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220728155420-12923 NodeName:default-k8s-different-port-20220728155420-12923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 15:55:35.335442   30316 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-different-port-20220728155420-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 15:55:35.335545   30316 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-different-port-20220728155420-12923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728155420-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
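
The kubeadm config and kubelet unit above are rendered from the option struct logged at kubeadm.go:158. As a rough sketch of that generation step only (a text/template render; the struct and field names here are illustrative, not minikube's actual types), one might write:

	// kubeadmtmpl: render a small fragment of an InitConfiguration from a
	// struct carrying the values seen in the logged kubeadm options.
	package main
	
	import (
		"os"
		"text/template"
	)
	
	const frag = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`
	
	type opts struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}
	
	func main() {
		t := template.Must(template.New("kubeadm").Parse(frag))
		t.Execute(os.Stdout, opts{
			AdvertiseAddress: "192.168.67.2",
			APIServerPort:    8444,
			CRISocket:        "/var/run/cri-dockerd.sock",
			NodeName:         "default-k8s-different-port-20220728155420-12923",
		})
	}
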
	I0728 15:55:35.335608   30316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 15:55:35.342895   30316 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 15:55:35.342938   30316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 15:55:35.349853   30316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (509 bytes)
	I0728 15:55:35.361884   30316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 15:55:35.374675   30316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2069 bytes)
	I0728 15:55:35.386579   30316 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0728 15:55:35.390206   30316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 15:55:35.399217   30316 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923 for IP: 192.168.67.2
	I0728 15:55:35.399333   30316 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 15:55:35.399381   30316 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 15:55:35.399467   30316 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.key
	I0728 15:55:35.399524   30316 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/apiserver.key.c7fa3a9e
	I0728 15:55:35.399597   30316 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/proxy-client.key
	I0728 15:55:35.399795   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 15:55:35.399835   30316 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 15:55:35.399850   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 15:55:35.399884   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 15:55:35.399915   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 15:55:35.399943   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 15:55:35.400003   30316 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 15:55:35.400535   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 15:55:35.416976   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0728 15:55:35.433119   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 15:55:35.449169   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0728 15:55:35.465944   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 15:55:35.482473   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 15:55:35.499106   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 15:55:35.515045   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 15:55:35.531222   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 15:55:35.548054   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 15:55:35.564648   30316 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 15:55:35.581798   30316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 15:55:35.593965   30316 ssh_runner.go:195] Run: openssl version
	I0728 15:55:35.599527   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 15:55:35.607305   30316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:55:35.611358   30316 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:55:35.611403   30316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 15:55:35.616434   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 15:55:35.623673   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 15:55:35.631654   30316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 15:55:35.635527   30316 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 15:55:35.635576   30316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 15:55:35.640871   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 15:55:35.648249   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 15:55:35.655974   30316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 15:55:35.660049   30316 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 15:55:35.660093   30316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 15:55:35.665541   30316 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
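
The openssl x509 -hash / ln -fs pairs above create the hash-named links (b5213941.0, 51391683.0, 3ec20f2e.0) that TLS libraries use to look up CA certificates by subject hash. A minimal sketch of that step (not minikube's code; it shells out to the same openssl invocation visible in the log):

	// rehash: compute a cert's OpenSSL subject hash and link <hash>.0 to it.
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	func rehash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("openssl: %w", err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // replace any stale link, mirroring ln -fs
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := rehash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
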
	I0728 15:55:35.672843   30316 kubeadm.go:395] StartCluster: {Name:default-k8s-different-port-20220728155420-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:default-k8s-different-port-20220728155420-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 15:55:35.672943   30316 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:55:35.701097   30316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 15:55:35.708861   30316 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 15:55:35.708875   30316 kubeadm.go:626] restartCluster start
	I0728 15:55:35.708918   30316 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 15:55:35.715675   30316 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:35.715731   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 15:55:35.792225   30316 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220728155420-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 15:55:35.792426   30316 kubeconfig.go:127] "default-k8s-different-port-20220728155420-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 15:55:35.792849   30316 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 15:55:35.794157   30316 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 15:55:35.802225   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:35.802292   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:35.810745   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:36.012529   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:36.012639   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:36.023340   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:36.210864   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:36.211001   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:36.220628   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:36.411346   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:36.411538   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:36.421878   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:36.611205   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:36.611335   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:36.621648   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:36.812972   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:36.813075   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:36.823163   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:37.012867   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:37.013068   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:37.023408   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:37.212893   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:37.213035   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:37.223601   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:37.412922   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:37.413087   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:37.423314   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:37.612969   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:37.613078   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:37.623490   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:37.812353   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:37.812456   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:37.823336   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.010830   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:38.010916   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:38.019821   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.212896   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:38.213005   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:38.223210   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.410813   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:38.410939   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:38.420711   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.611654   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:38.611825   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:38.622545   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.811275   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:38.811362   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:38.821118   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.821127   30316 api_server.go:165] Checking apiserver status ...
	I0728 15:55:38.821175   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 15:55:38.830270   30316 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.830283   30316 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
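
[Editor's note] Each repeated "Checking apiserver status" block above is one iteration of a liveness probe: run pgrep for the kube-apiserver process, back off roughly 200ms (visible in the timestamps), and give up when the wait budget expires — which is what produces the "needs reconfigure" decision. A rough sketch of the pattern; runCmd is a local stand-in for minikube's ssh_runner, which executes the same command inside the node:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // runCmd stands in for ssh_runner: it runs the probe locally here,
    // whereas minikube runs it over SSH inside the node container.
    func runCmd(cmd string) error {
    	return exec.Command("/bin/sh", "-c", cmd).Run()
    }

    // waitForAPIServerProcess polls pgrep until kube-apiserver appears or
    // the deadline passes, mirroring the retry loop in the log above.
    func waitForAPIServerProcess(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := runCmd(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
    			return nil
    		}
    		time.Sleep(200 * time.Millisecond) // matches the ~200ms cadence above
    	}
    	return errors.New("timed out waiting for the condition")
    }

    func main() {
    	if err := waitForAPIServerProcess(3 * time.Second); err != nil {
    		fmt.Println("apiserver error:", err) // leads to "needs reconfigure"
    	}
    }
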
	I0728 15:55:38.830287   30316 kubeadm.go:1092] stopping kube-system containers ...
	I0728 15:55:38.830345   30316 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 15:55:38.860913   30316 docker.go:443] Stopping containers: [7ec6c6148c24 fa1f81d73248 17fdf8663ffe 2fdb37a458e2 b90b24e0cb1f c98d3d66b2c3 d9963c325023 0204554478fa ab27dfcb31e5 49d7071af640 288680d90206 651e3092e073 ac40e1ec26cd 9ab0cfc84627 c85c92139dea 9a3c98b2d6cb]
	I0728 15:55:38.860989   30316 ssh_runner.go:195] Run: docker stop 7ec6c6148c24 fa1f81d73248 17fdf8663ffe 2fdb37a458e2 b90b24e0cb1f c98d3d66b2c3 d9963c325023 0204554478fa ab27dfcb31e5 49d7071af640 288680d90206 651e3092e073 ac40e1ec26cd 9ab0cfc84627 c85c92139dea 9a3c98b2d6cb
	I0728 15:55:38.890667   30316 ssh_runner.go:195] Run: sudo systemctl stop kubelet
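
[Editor's note] Tearing down the control plane, as logged above, is two shell commands plus a kubelet stop: list the IDs of containers whose names match kubeadm's k8s_<container>_<pod>_<namespace>_ convention for kube-system, docker stop them, then halt the kubelet so the static pods are not respawned. A condensed sketch with the filter string copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// List kube-system pod containers by kubeadm's naming convention.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) > 0 {
    		// docker stop <id...> in one invocation, as in the log.
    		if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
    			panic(err)
    		}
    	}
    	// Halt the kubelet so nothing restarts the stopped containers.
    	fmt.Println(exec.Command("sudo", "systemctl", "stop", "kubelet").Run())
    }
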
	I0728 15:55:38.900785   30316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 15:55:38.907947   30316 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 28 22:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 28 22:54 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Jul 28 22:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 28 22:54 /etc/kubernetes/scheduler.conf
	
	I0728 15:55:38.908005   30316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0728 15:55:38.915307   30316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0728 15:55:38.922584   30316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0728 15:55:38.929622   30316 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.929673   30316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 15:55:38.936409   30316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0728 15:55:38.943336   30316 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:55:38.943378   30316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0728 15:55:38.950159   30316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 15:55:38.957430   30316 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
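
[Editor's note] The grep sequence above is the stale-config check: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint (https://control-plane.minikube.internal:8444 for this profile) is deleted so the next kubeadm phase can regenerate it — here admin.conf and kubelet.conf passed, while controller-manager.conf and scheduler.conf were removed. A sketch that mirrors the logged commands:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8444"
    	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := "/etc/kubernetes/" + f
    		// grep exits non-zero when the endpoint is absent; that marks the file stale.
    		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    			_ = exec.Command("sudo", "rm", "-f", path).Run()
    		}
    	}
    }
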
	I0728 15:55:38.957440   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:39.001063   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:39.978959   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:40.156999   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:40.205624   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
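
[Editor's note] Reconfiguration replays kubeadm's init phases individually rather than running a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane, and local etcd, each against the freshly copied kubeadm.yaml. A sketch of the sequence, with the binary path and config path taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const binDir = "/var/lib/minikube/binaries/v1.24.3" // from the log
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, p := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
    			binDir, p)
    		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
    			panic(fmt.Sprintf("phase %q failed: %v", p, err))
    		}
    	}
    }
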
	I0728 15:55:40.261056   30316 api_server.go:51] waiting for apiserver process to appear ...
	I0728 15:55:40.261121   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:55:40.793843   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:55:41.293947   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:55:41.312913   30316 api_server.go:71] duration metric: took 1.051877628s to wait for apiserver process to appear ...
	I0728 15:55:41.312925   30316 api_server.go:87] waiting for apiserver healthz status ...
	I0728 15:55:41.312937   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:41.314088   30316 api_server.go:256] stopped: https://127.0.0.1:59515/healthz: Get "https://127.0.0.1:59515/healthz": EOF
	I0728 15:55:41.814307   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:44.109812   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 15:55:44.109835   30316 api_server.go:102] status: https://127.0.0.1:59515/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 15:55:44.314142   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:44.320716   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:55:44.320735   30316 api_server.go:102] status: https://127.0.0.1:59515/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:55:44.814131   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:44.819883   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 15:55:44.819900   30316 api_server.go:102] status: https://127.0.0.1:59515/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 15:55:45.316185   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 15:55:45.323137   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 200:
	ok
	I0728 15:55:45.329031   30316 api_server.go:140] control plane version: v1.24.3
	I0728 15:55:45.329042   30316 api_server.go:130] duration metric: took 4.016180474s to wait for apiserver health ...
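
[Editor's note] The three responses above trace an apiserver coming up: first 403 because anonymous users may not read /healthz, then 500 while the rbac/bootstrap-roles and scheduling bootstrap post-start hooks are still failing, then 200. A sketch of such a polling client — not minikube's exact implementation; the port is the forwarded one from the log, and TLS verification is skipped because the probe hits 127.0.0.1 rather than a name on the serving certificate:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://127.0.0.1:59515/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz ok")
    				return
    			}
    			// 403 = anonymous access refused; 500 = post-start hooks
    			// (e.g. rbac/bootstrap-roles) still failing. Keep polling.
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
    	}
    	fmt.Println("timed out waiting for healthz")
    }
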
	I0728 15:55:45.329048   30316 cni.go:95] Creating CNI manager for ""
	I0728 15:55:45.329052   30316 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 15:55:45.329064   30316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 15:55:45.335966   30316 system_pods.go:59] 8 kube-system pods found
	I0728 15:55:45.335982   30316 system_pods.go:61] "coredns-6d4b75cb6d-p47tc" [097a4ddd-127a-4d76-9ef2-b31856680a61] Running
	I0728 15:55:45.335987   30316 system_pods.go:61] "etcd-default-k8s-different-port-20220728155420-12923" [b3af4c0d-6e4a-4de2-94a7-6f0e9804c43e] Running
	I0728 15:55:45.335992   30316 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220728155420-12923" [915ee0d1-3a30-4488-a8fc-a2fd46ff53dc] Running
	I0728 15:55:45.335999   30316 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220728155420-12923" [6697437d-e349-4688-91e7-6755001fc84c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0728 15:55:45.336004   30316 system_pods.go:61] "kube-proxy-nbrlj" [9c349fd7-0054-4e68-8374-3d4ccfb14b9d] Running
	I0728 15:55:45.336008   30316 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220728155420-12923" [f7033e33-c299-41ac-b929-b557f088bd55] Running
	I0728 15:55:45.336013   30316 system_pods.go:61] "metrics-server-5c6f97fb75-q8trj" [880b41fa-bdc2-4c65-b3c0-05c1487607d7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 15:55:45.336019   30316 system_pods.go:61] "storage-provisioner" [18cb92a5-43f3-4ec9-aa95-d82651b00937] Running
	I0728 15:55:45.336023   30316 system_pods.go:74] duration metric: took 6.955342ms to wait for pod list to return data ...
	I0728 15:55:45.336030   30316 node_conditions.go:102] verifying NodePressure condition ...
	I0728 15:55:45.339315   30316 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 15:55:45.339330   30316 node_conditions.go:123] node cpu capacity is 6
	I0728 15:55:45.339339   30316 node_conditions.go:105] duration metric: took 3.304859ms to run NodePressure ...
	I0728 15:55:45.339355   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 15:55:45.458228   30316 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0728 15:55:45.462563   30316 kubeadm.go:777] kubelet initialised
	I0728 15:55:45.462573   30316 kubeadm.go:778] duration metric: took 4.332569ms waiting for restarted kubelet to initialise ...
	I0728 15:55:45.462580   30316 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:55:45.466883   30316 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-p47tc" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.471320   30316 pod_ready.go:92] pod "coredns-6d4b75cb6d-p47tc" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:45.471328   30316 pod_ready.go:81] duration metric: took 4.434775ms waiting for pod "coredns-6d4b75cb6d-p47tc" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.471334   30316 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.475851   30316 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:45.475859   30316 pod_ready.go:81] duration metric: took 4.521291ms waiting for pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.475864   30316 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.480433   30316 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:45.480440   30316 pod_ready.go:81] duration metric: took 4.572466ms waiting for pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:45.480448   30316 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:47.739926   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:49.740311   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:52.239842   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:54.741106   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:57.241016   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:59.242030   30316 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"False"
	I0728 15:55:59.739349   30316 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:59.739360   30316 pod_ready.go:81] duration metric: took 14.259145114s waiting for pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.739367   30316 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nbrlj" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.743616   30316 pod_ready.go:92] pod "kube-proxy-nbrlj" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:59.743625   30316 pod_ready.go:81] duration metric: took 4.252558ms waiting for pod "kube-proxy-nbrlj" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.743630   30316 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.747906   30316 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 15:55:59.747914   30316 pod_ready.go:81] duration metric: took 4.279711ms waiting for pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 15:55:59.747923   30316 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace to be "Ready" ...
	I0728 15:56:01.759363   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:04.256978   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:06.261199   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:08.757310   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:10.759539   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:13.256936   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:15.259376   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:17.260510   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:19.756798   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:21.758954   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:24.260530   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:26.758177   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:28.759402   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:31.258712   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:33.259127   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:35.259789   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:37.761287   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:40.257352   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:42.259524   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:44.260773   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:46.758827   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:48.760537   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:51.260493   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:53.756371   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:55.759549   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:56:58.257034   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:00.259021   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:02.267501   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:04.760386   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:07.257438   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:09.758070   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:12.259544   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:14.757376   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:16.758596   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:19.258413   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:21.258981   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:23.758572   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:26.257652   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:28.759470   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:30.759647   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:33.259035   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:35.756810   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:38.255923   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:40.256087   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:42.257292   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:44.257582   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:46.759216   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:49.256554   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:51.259323   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:53.757697   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:56.259190   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:57:58.756634   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:00.758211   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:03.256203   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:05.757714   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:08.255918   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:10.757402   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:13.256697   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:15.759116   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:18.254886   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:20.256061   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:22.256139   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:24.758663   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:27.256244   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:29.258507   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:31.755489   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:33.758645   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:36.256430   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:38.757168   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:40.757635   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:43.255776   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:45.255876   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:47.256269   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:49.256281   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:51.756592   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:54.255665   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:56.756610   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:58:59.255251   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:01.255321   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:03.257227   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:05.756051   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:08.255467   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:10.256148   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:12.755028   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:14.756329   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:16.756937   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:19.253884   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:21.255451   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:23.756009   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:25.757047   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:28.257085   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:30.755577   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:33.255299   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:35.257914   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:37.757572   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:40.256191   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:42.256681   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:44.754921   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:46.755973   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:48.756572   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:50.756662   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:53.253971   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:55.256336   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:57.754071   30316 pod_ready.go:102] pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace has status "Ready":"False"
	I0728 15:59:59.748516   30316 pod_ready.go:81] duration metric: took 4m0.004577503s waiting for pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace to be "Ready" ...
	E0728 15:59:59.748542   30316 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-5c6f97fb75-q8trj" in "kube-system" namespace to be "Ready" (will not retry!)
	I0728 15:59:59.748610   30316 pod_ready.go:38] duration metric: took 4m14.290256677s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 15:59:59.748647   30316 kubeadm.go:630] restartCluster took 4m24.044163545s
	W0728 15:59:59.748768   30316 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
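
[Editor's note] The four minutes of "Ready":"False" lines above are one condition check per poll: metrics-server never becomes Ready (this suite deliberately points the addon at a fake.domain image, as the "Using image fake.domain/k8s.gcr.io/echoserver:1.4" line further down shows), the 4m0s budget expires, and minikube falls back from restarting the cluster to resetting it. The Ready test itself reduces to scanning the pod's status conditions; a sketch with client-go, using the pod name from the log:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady mirrors the check behind pod_ready.go: a pod counts as
    // "Ready" when its PodReady condition is ConditionTrue.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
    		"metrics-server-5c6f97fb75-q8trj", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Ready:", isPodReady(pod))
    }
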
	I0728 15:59:59.748795   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0728 16:00:02.104339   30316 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2.355568477s)
	I0728 16:00:02.104398   30316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 16:00:02.113958   30316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 16:00:02.121438   30316 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0728 16:00:02.121481   30316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 16:00:02.129052   30316 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0728 16:00:02.129078   30316 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0728 16:00:02.411869   30316 out.go:204]   - Generating certificates and keys ...
	I0728 16:00:03.414591   30316 out.go:204]   - Booting up control plane ...
	I0728 16:00:10.524898   30316 out.go:204]   - Configuring RBAC rules ...
	I0728 16:00:10.902674   30316 cni.go:95] Creating CNI manager for ""
	I0728 16:00:10.902686   30316 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 16:00:10.902703   30316 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 16:00:10.902792   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:10.902802   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.0 minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551 minikube.k8s.io/name=default-k8s-different-port-20220728155420-12923 minikube.k8s.io/updated_at=2022_07_28T16_00_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:10.913574   30316 ops.go:34] apiserver oom_adj: -16
	I0728 16:00:11.094279   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:11.648292   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:12.148251   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:12.648564   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:13.148181   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:13.648060   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:14.149495   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:14.648613   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:15.148181   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:15.648223   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:16.147977   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:16.648059   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:17.148468   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:17.648414   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:18.148930   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:18.648014   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:19.148979   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:19.649861   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:20.148179   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:20.647929   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:21.148951   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:21.649324   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:22.149231   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:22.648610   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:23.149446   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:23.647962   30316 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0728 16:00:23.790473   30316 kubeadm.go:1045] duration metric: took 12.887965506s to wait for elevateKubeSystemPrivileges.
	I0728 16:00:23.790492   30316 kubeadm.go:397] StartCluster complete in 4m48.122451346s
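
[Editor's note] The long run of "kubectl get sa default" commands above is the standard wait for the default service account: kube-controller-manager creates it asynchronously after the namespace exists, so minikube retries at roughly 500ms intervals until the get succeeds (about 12.9s here), at which point the kube-system privilege elevation is usable. A sketch of the loop, with the kubectl path from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	const kubectl = "/var/lib/minikube/binaries/v1.24.3/kubectl" // from the log
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			fmt.Println("default service account exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
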
	I0728 16:00:23.790510   30316 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:00:23.790593   30316 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 16:00:23.791159   30316 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:00:24.307591   30316 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220728155420-12923" rescaled to 1
	I0728 16:00:24.307627   30316 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8444 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 16:00:24.307647   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 16:00:24.307667   30316 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0728 16:00:24.307873   30316 config.go:178] Loaded profile config "default-k8s-different-port-20220728155420-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 16:00:24.332263   30316 out.go:177] * Verifying Kubernetes components...
	I0728 16:00:24.332328   30316 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.332327   30316 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.354025   30316 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.354042   30316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 16:00:24.354056   30316 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.332334   30316 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220728155420-12923"
	W0728 16:00:24.354084   30316 addons.go:162] addon storage-provisioner should already be in state true
	I0728 16:00:24.332359   30316 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.354099   30316 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220728155420-12923"
	W0728 16:00:24.354103   30316 addons.go:162] addon metrics-server should already be in state true
	I0728 16:00:24.354085   30316 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220728155420-12923"
	W0728 16:00:24.354129   30316 addons.go:162] addon dashboard should already be in state true
	I0728 16:00:24.354132   30316 host.go:66] Checking if "default-k8s-different-port-20220728155420-12923" exists ...
	I0728 16:00:24.354131   30316 host.go:66] Checking if "default-k8s-different-port-20220728155420-12923" exists ...
	I0728 16:00:24.354155   30316 host.go:66] Checking if "default-k8s-different-port-20220728155420-12923" exists ...
	I0728 16:00:24.354401   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.354528   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.355541   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.355684   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.390460   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.390458   30316 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0728 16:00:24.535958   30316 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0728 16:00:24.520667   30316 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:24.567843   30316 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220728155420-12923" to be "Ready" ...
	I0728 16:00:24.573082   30316 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0728 16:00:24.573099   30316 addons.go:162] addon default-storageclass should already be in state true
	I0728 16:00:24.635787   30316 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0728 16:00:24.635861   30316 host.go:66] Checking if "default-k8s-different-port-20220728155420-12923" exists ...
	I0728 16:00:24.639025   30316 node_ready.go:49] node "default-k8s-different-port-20220728155420-12923" has status "Ready":"True"
	I0728 16:00:24.657058   30316 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0728 16:00:24.678275   30316 node_ready.go:38] duration metric: took 42.466015ms waiting for node "default-k8s-different-port-20220728155420-12923" to be "Ready" ...
	I0728 16:00:24.737012   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0728 16:00:24.737029   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0728 16:00:24.737022   30316 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 16:00:24.678484   30316 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 16:00:24.679284   30316 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220728155420-12923 --format={{.State.Status}}
	I0728 16:00:24.737050   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 16:00:24.715939   30316 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0728 16:00:24.737094   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.737093   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0728 16:00:24.737131   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.737171   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.746547   30316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:24.836630   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 16:00:24.840168   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 16:00:24.841441   30316 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 16:00:24.841451   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 16:00:24.841523   30316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220728155420-12923
	I0728 16:00:24.841520   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 16:00:24.910866   30316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59511 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/default-k8s-different-port-20220728155420-12923/id_rsa Username:docker}
	I0728 16:00:24.997852   30316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 16:00:25.003279   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0728 16:00:25.003293   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0728 16:00:25.093643   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0728 16:00:25.093667   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0728 16:00:25.113432   30316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 16:00:25.118821   30316 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0728 16:00:25.118833   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0728 16:00:25.202782   30316 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0728 16:00:25.202797   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0728 16:00:25.207372   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0728 16:00:25.207387   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0728 16:00:25.283674   30316 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 16:00:25.283692   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0728 16:00:25.299996   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0728 16:00:25.300010   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0728 16:00:25.401995   30316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 16:00:25.411240   30316 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.020728929s)
	I0728 16:00:25.411263   30316 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0728 16:00:25.482536   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0728 16:00:25.482550   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0728 16:00:25.502802   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0728 16:00:25.502823   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0728 16:00:25.586073   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0728 16:00:25.586092   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0728 16:00:25.689008   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0728 16:00:25.689025   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0728 16:00:25.705569   30316 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 16:00:25.705582   30316 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0728 16:00:25.723569   30316 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 16:00:25.991093   30316 addons.go:383] Verifying addon metrics-server=true in "default-k8s-different-port-20220728155420-12923"
	I0728 16:00:26.705744   30316 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0728 16:00:26.762993   30316 addons.go:414] enableAddons completed in 2.455353871s
	I0728 16:00:26.767417   30316 pod_ready.go:102] pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace has status "Ready":"False"
	I0728 16:00:29.265572   30316 pod_ready.go:102] pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace has status "Ready":"False"
	I0728 16:00:31.779396   30316 pod_ready.go:97] error getting pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-nmb74" not found
	I0728 16:00:31.779412   30316 pod_ready.go:81] duration metric: took 7.03295771s waiting for pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace to be "Ready" ...
	E0728 16:00:31.779420   30316 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-6d4b75cb6d-nmb74" in "kube-system" namespace (skipping!): pods "coredns-6d4b75cb6d-nmb74" not found
	I0728 16:00:31.779425   30316 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-vm6w7" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.784329   30316 pod_ready.go:92] pod "coredns-6d4b75cb6d-vm6w7" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.784338   30316 pod_ready.go:81] duration metric: took 4.908258ms waiting for pod "coredns-6d4b75cb6d-vm6w7" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.784346   30316 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.788691   30316 pod_ready.go:92] pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.788700   30316 pod_ready.go:81] duration metric: took 4.337288ms waiting for pod "etcd-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.788706   30316 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.793360   30316 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.793369   30316 pod_ready.go:81] duration metric: took 4.65919ms waiting for pod "kube-apiserver-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.793376   30316 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.798075   30316 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.798084   30316 pod_ready.go:81] duration metric: took 4.703602ms waiting for pod "kube-controller-manager-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.798090   30316 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pv62j" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.978720   30316 pod_ready.go:92] pod "kube-proxy-pv62j" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:31.978732   30316 pod_ready.go:81] duration metric: took 180.639115ms waiting for pod "kube-proxy-pv62j" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:31.978741   30316 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:32.383133   30316 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace has status "Ready":"True"
	I0728 16:00:32.383144   30316 pod_ready.go:81] duration metric: took 404.402924ms waiting for pod "kube-scheduler-default-k8s-different-port-20220728155420-12923" in "kube-system" namespace to be "Ready" ...
	I0728 16:00:32.383151   30316 pod_ready.go:38] duration metric: took 7.646241882s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0728 16:00:32.383168   30316 api_server.go:51] waiting for apiserver process to appear ...
	I0728 16:00:32.383219   30316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:00:32.394067   30316 api_server.go:71] duration metric: took 8.086551332s to wait for apiserver process to appear ...
	I0728 16:00:32.394085   30316 api_server.go:87] waiting for apiserver healthz status ...
	I0728 16:00:32.394093   30316 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59515/healthz ...
	I0728 16:00:32.399509   30316 api_server.go:266] https://127.0.0.1:59515/healthz returned 200:
	ok
	I0728 16:00:32.400740   30316 api_server.go:140] control plane version: v1.24.3
	I0728 16:00:32.400749   30316 api_server.go:130] duration metric: took 6.659375ms to wait for apiserver health ...
	I0728 16:00:32.400754   30316 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 16:00:32.580287   30316 system_pods.go:59] 8 kube-system pods found
	I0728 16:00:32.580302   30316 system_pods.go:61] "coredns-6d4b75cb6d-vm6w7" [00979eea-4984-40c4-9975-4e9ef9c33a1f] Running
	I0728 16:00:32.580306   30316 system_pods.go:61] "etcd-default-k8s-different-port-20220728155420-12923" [8acc6fb5-6fbb-4eb7-ad74-bc24bde492ae] Running
	I0728 16:00:32.580310   30316 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220728155420-12923" [1d24c376-e9f9-4dde-bf95-9c7f4a5ff6de] Running
	I0728 16:00:32.580314   30316 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220728155420-12923" [8fa6e4db-daea-4dcc-a3d3-ad78a001c2e7] Running
	I0728 16:00:32.580317   30316 system_pods.go:61] "kube-proxy-pv62j" [25bad633-52dd-438c-ade8-4b59d566d336] Running
	I0728 16:00:32.580321   30316 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220728155420-12923" [9243a062-f92f-4f4c-8388-a47f60b2b439] Running
	I0728 16:00:32.580329   30316 system_pods.go:61] "metrics-server-5c6f97fb75-58sqm" [55c181c1-c5da-4dbb-8b61-2522d22261f4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 16:00:32.580343   30316 system_pods.go:61] "storage-provisioner" [f0b7ea65-aadc-49a7-8498-f541759e61a9] Running
	I0728 16:00:32.580348   30316 system_pods.go:74] duration metric: took 179.593374ms to wait for pod list to return data ...
	I0728 16:00:32.580353   30316 default_sa.go:34] waiting for default service account to be created ...
	I0728 16:00:32.760390   30316 default_sa.go:45] found service account: "default"
	I0728 16:00:32.760401   30316 default_sa.go:55] duration metric: took 180.047618ms for default service account to be created ...
	I0728 16:00:32.760406   30316 system_pods.go:116] waiting for k8s-apps to be running ...
	I0728 16:00:32.965098   30316 system_pods.go:86] 8 kube-system pods found
	I0728 16:00:32.965112   30316 system_pods.go:89] "coredns-6d4b75cb6d-vm6w7" [00979eea-4984-40c4-9975-4e9ef9c33a1f] Running
	I0728 16:00:32.965116   30316 system_pods.go:89] "etcd-default-k8s-different-port-20220728155420-12923" [8acc6fb5-6fbb-4eb7-ad74-bc24bde492ae] Running
	I0728 16:00:32.965120   30316 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20220728155420-12923" [1d24c376-e9f9-4dde-bf95-9c7f4a5ff6de] Running
	I0728 16:00:32.965124   30316 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20220728155420-12923" [8fa6e4db-daea-4dcc-a3d3-ad78a001c2e7] Running
	I0728 16:00:32.965127   30316 system_pods.go:89] "kube-proxy-pv62j" [25bad633-52dd-438c-ade8-4b59d566d336] Running
	I0728 16:00:32.965132   30316 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20220728155420-12923" [9243a062-f92f-4f4c-8388-a47f60b2b439] Running
	I0728 16:00:32.965138   30316 system_pods.go:89] "metrics-server-5c6f97fb75-58sqm" [55c181c1-c5da-4dbb-8b61-2522d22261f4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 16:00:32.965143   30316 system_pods.go:89] "storage-provisioner" [f0b7ea65-aadc-49a7-8498-f541759e61a9] Running
	I0728 16:00:32.965147   30316 system_pods.go:126] duration metric: took 204.741682ms to wait for k8s-apps to be running ...
	I0728 16:00:32.965153   30316 system_svc.go:44] waiting for kubelet service to be running ....
	I0728 16:00:32.965202   30316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 16:00:32.986222   30316 system_svc.go:56] duration metric: took 21.063891ms WaitForService to wait for kubelet.
	I0728 16:00:32.986237   30316 kubeadm.go:572] duration metric: took 8.678736077s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0728 16:00:32.986251   30316 node_conditions.go:102] verifying NodePressure condition ...
	I0728 16:00:33.178411   30316 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 16:00:33.178425   30316 node_conditions.go:123] node cpu capacity is 6
	I0728 16:00:33.178437   30316 node_conditions.go:105] duration metric: took 192.177388ms to run NodePressure ...
	I0728 16:00:33.178453   30316 start.go:216] waiting for startup goroutines ...
	I0728 16:00:33.214457   30316 start.go:506] kubectl: 1.24.2, cluster: 1.24.3 (minor skew: 0)
	I0728 16:00:33.238006   30316 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20220728155420-12923" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 22:55:32 UTC, end at Thu 2022-07-28 23:01:27 UTC. --
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.207377822Z" level=info msg="ignoring event" container=e4f6394e3c8557326ba43e6c8ca6f19bfff1e124d52845e06869638d3e633d97 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.275029814Z" level=info msg="ignoring event" container=175124091cbe84068232dd9d624db0f5cdf618007ea8656aff90201e990c2268 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.342483610Z" level=info msg="ignoring event" container=2f023feef52c478633d427518539315695a740777ffae167493d3b1372350cf3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.433721908Z" level=info msg="ignoring event" container=fa03954d2288eab9eb6f304bab496c4cc114656ec8a90c65f683f2ea5f8d18d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.501846169Z" level=info msg="ignoring event" container=6bac6aba3bf9a76a101dda8b021ad6811f33b8f9cac13b1f81324fb94b644b6e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.575626999Z" level=info msg="ignoring event" container=09abd750246d2a4978b1127cf196aca26a803840822bec351d3c98b4654d2871 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.648041486Z" level=info msg="ignoring event" container=a8fa1df66b25ac2a8104871f83080c4d0f37631394f24a9cb5e02b2077bdb55d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:01 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:01.787047193Z" level=info msg="ignoring event" container=ed4e00e95ddcd163fee6a5aa45e06076cfeadbbd67e89539733742977af388df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:26 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:26.854448762Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:00:26 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:26.854495261Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:00:26 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:26.855700145Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:00:28 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:28.259519307Z" level=warning msg="reference for unknown type: " digest="sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3" remote="docker.io/kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3"
	Jul 28 23:00:30 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:30.436403882Z" level=info msg="ignoring event" container=6a5caa98c9865bb244e15a0cfd68d280c61cb122072bd49bc1ee7bbaa8293f73 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:30 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:30.517228141Z" level=info msg="ignoring event" container=bef75a25eb7e6cc1518887a9e2014aef4c41e855f0a0f3fb4acbc51837379f90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:34 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:34.831105792Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jul 28 23:00:35 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:35.114544704Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jul 28 23:00:38 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:38.120610373Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:00:38 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:38.120659439Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:00:38 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:38.187523181Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:00:38 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:38.334370351Z" level=info msg="ignoring event" container=aad3ab43569540d94592e1258714a770d72430ad5a8526edc39c8aec81f39d53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:00:38 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:00:38.558528169Z" level=info msg="ignoring event" container=02b51e5ec9ed0854e334add4f068e6e1a0777b05b3d76db4fc171310ad19ae80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:01:24 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:01:24.404768275Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:01:24 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:01:24.404813243Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:01:24 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:01:24.489858260Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host"
	Jul 28 23:01:24 default-k8s-different-port-20220728155420-12923 dockerd[525]: time="2022-07-28T23:01:24.856300116Z" level=info msg="ignoring event" container=ed96c2fb779ffd97feb9832c8208ab139b28264c81606c936e8faf8283c68d02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID
	ed96c2fb779ff       a90209bb39e3d                                                                                    3 seconds ago        Exited              dashboard-metrics-scraper   2                   015bf6710e787
	9e8e6f7bfaeb7       kubernetesui/dashboard@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3   53 seconds ago       Running             kubernetes-dashboard        0                   d3e1af9348dad
	7da6a2c799210       6e38f40d628db                                                                                    About a minute ago   Running             storage-provisioner         0                   f7839ec11aa40
	e9059bdd51b3e       a4ca41631cc7a                                                                                    About a minute ago   Running             coredns                     0                   3110a60df04ed
	fb4ee731eaddc       2ae1ba6417cbc                                                                                    About a minute ago   Running             kube-proxy                  0                   650b13da5b1ca
	f6ba3dae21920       d521dd763e2e3                                                                                    About a minute ago   Running             kube-apiserver              0                   2d0741186520e
	612835ba363da       aebe758cef4cd                                                                                    About a minute ago   Running             etcd                        0                   da27fa6a56a5f
	8986c1e4a8052       586c112956dfc                                                                                    About a minute ago   Running             kube-controller-manager     0                   58dc75b4ddf15
	c1b00dc8df258       3a5aa3a515f5d                                                                                    About a minute ago   Running             kube-scheduler              0                   7fb35b1eaeced
	
	* 
	* ==> coredns [e9059bdd51b3] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220728155420-12923
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220728155420-12923
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551
	                    minikube.k8s.io/name=default-k8s-different-port-20220728155420-12923
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_28T16_00_10_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Jul 2022 23:00:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220728155420-12923
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Jul 2022 23:01:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Jul 2022 23:01:20 +0000   Thu, 28 Jul 2022 23:00:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Jul 2022 23:01:20 +0000   Thu, 28 Jul 2022 23:00:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Jul 2022 23:01:20 +0000   Thu, 28 Jul 2022 23:00:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Jul 2022 23:01:20 +0000   Thu, 28 Jul 2022 23:01:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    default-k8s-different-port-20220728155420-12923
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                d8d926e1-386e-4950-b4f3-f8e0acfc6b16
	  Boot ID:                    defbf4cf-33ea-4fb3-a1e4-ed0c170fdaf9
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-vm6w7                                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     64s
	  kube-system                 etcd-default-k8s-different-port-20220728155420-12923                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         76s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220728155420-12923             250m (4%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220728155420-12923    200m (3%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-pv62j                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220728155420-12923             100m (1%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 metrics-server-5c6f97fb75-58sqm                                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         62s
	  kube-system                 storage-provisioner                                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-bgrbr                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-7sfms                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 63s                kube-proxy       
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  83s (x4 over 83s)  kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x4 over 83s)  kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x3 over 83s)  kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 77s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  77s                kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s                kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s                kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasSufficientPID
	  Normal  NodeReady                66s                kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeReady
	  Normal  RegisteredNode           64s                node-controller  Node default-k8s-different-port-20220728155420-12923 event: Registered Node default-k8s-different-port-20220728155420-12923 in Controller
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7s                 kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             7s                 kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeNotReady
	  Normal  NodeReady                7s                 kubelet          Node default-k8s-different-port-20220728155420-12923 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [612835ba363d] <==
	* {"level":"info","ts":"2022-07-28T23:00:05.036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-07-28T23:00:05.036Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-07-28T23:00:05.038Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T23:00:05.038Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-28T23:00:05.038Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T23:00:05.038Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-28T23:00:05.038Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:default-k8s-different-port-20220728155420-12923 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T23:00:05.932Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T23:00:05.933Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:00:05.933Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:00:05.933Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:00:05.933Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-28T23:00:05.936Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-07-28T23:00:05.938Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T23:00:05.938Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  23:01:28 up  1:22,  0 users,  load average: 0.59, 0.96, 1.04
	Linux default-k8s-different-port-20220728155420-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [f6ba3dae2192] <==
	* I0728 23:00:09.870432       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0728 23:00:10.745451       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0728 23:00:10.751123       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0728 23:00:10.758975       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0728 23:00:10.844768       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0728 23:00:23.292163       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0728 23:00:23.504127       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0728 23:00:23.953823       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0728 23:00:25.936348       1 alloc.go:327] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.103.198.26]
	I0728 23:00:26.631910       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.103.124.158]
	I0728 23:00:26.642072       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.109.153.185]
	W0728 23:00:26.831315       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 23:00:26.831362       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0728 23:00:26.831406       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 23:00:26.831421       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 23:00:26.831430       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0728 23:00:26.832496       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 23:01:26.788460       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 23:01:26.788500       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0728 23:01:26.788506       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 23:01:26.788687       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 23:01:26.788718       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0728 23:01:26.790592       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [8986c1e4a805] <==
	* I0728 23:00:23.949342       1 shared_informer.go:262] Caches are synced for garbage collector
	I0728 23:00:23.949469       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0728 23:00:23.955724       1 shared_informer.go:262] Caches are synced for garbage collector
	I0728 23:00:25.839040       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0728 23:00:25.843233       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0728 23:00:25.848594       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0728 23:00:25.893367       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-58sqm"
	I0728 23:00:26.503867       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0728 23:00:26.508393       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 23:00:26.513589       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 23:00:26.518621       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 23:00:26.518670       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 23:00:26.518811       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	E0728 23:00:26.524376       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 23:00:26.524416       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 23:00:26.526690       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 23:00:26.533093       1 replica_set.go:550] sync "kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" failed with pods "dashboard-metrics-scraper-dffd48c4c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 23:00:26.533145       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-dffd48c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0728 23:00:26.533295       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0728 23:00:26.538138       1 replica_set.go:550] sync "kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" failed with pods "kubernetes-dashboard-5fd5574d9f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0728 23:00:26.538478       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5fd5574d9f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0728 23:00:26.545339       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-7sfms"
	I0728 23:00:26.595094       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-bgrbr"
	E0728 23:01:20.471988       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0728 23:01:20.485480       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [fb4ee731eadd] <==
	* I0728 23:00:23.931427       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0728 23:00:23.931570       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0728 23:00:23.931618       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 23:00:23.949037       1 server_others.go:206] "Using iptables Proxier"
	I0728 23:00:23.949127       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 23:00:23.949148       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 23:00:23.949165       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 23:00:23.949199       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 23:00:23.949337       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 23:00:23.949550       1 server.go:661] "Version info" version="v1.24.3"
	I0728 23:00:23.949610       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 23:00:23.949990       1 config.go:317] "Starting service config controller"
	I0728 23:00:23.950030       1 config.go:226] "Starting endpoint slice config controller"
	I0728 23:00:23.950038       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 23:00:23.950038       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 23:00:23.950942       1 config.go:444] "Starting node config controller"
	I0728 23:00:23.951074       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 23:00:24.050916       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0728 23:00:24.051657       1 shared_informer.go:262] Caches are synced for node config
	I0728 23:00:24.051744       1 shared_informer.go:262] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [c1b00dc8df25] <==
	* W0728 23:00:07.820022       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0728 23:00:07.820079       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0728 23:00:07.820147       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0728 23:00:07.820159       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0728 23:00:07.820275       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0728 23:00:07.820305       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0728 23:00:07.820372       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0728 23:00:07.820414       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0728 23:00:07.820291       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0728 23:00:07.820448       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0728 23:00:07.820492       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0728 23:00:07.820501       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0728 23:00:07.820756       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0728 23:00:07.820767       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0728 23:00:07.821156       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0728 23:00:07.821167       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0728 23:00:07.823953       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0728 23:00:07.823967       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0728 23:00:07.824222       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0728 23:00:07.824257       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0728 23:00:08.716475       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0728 23:00:08.716525       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0728 23:00:08.962353       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0728 23:00:08.962449       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0728 23:00:11.617726       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
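
The cascade of "forbidden" list/watch failures above reads as startup-ordering noise rather than a broken ClusterRoleBinding: kube-scheduler reconnected before the restarted apiserver had synced its RBAC caches, every informer list was rejected, and the errors stop once the caches sync at 23:00:11. If the binding itself were suspect, a quick check against this profile would be kubectl auth can-i list pods --all-namespaces --as=system:kube-scheduler, which should print "yes" once the cluster settles.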
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 22:55:32 UTC, end at Thu 2022-07-28 23:01:28 UTC. --
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957187    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6wb6\" (UniqueName: \"kubernetes.io/projected/00979eea-4984-40c4-9975-4e9ef9c33a1f-kube-api-access-z6wb6\") pod \"coredns-6d4b75cb6d-vm6w7\" (UID: \"00979eea-4984-40c4-9975-4e9ef9c33a1f\") " pod="kube-system/coredns-6d4b75cb6d-vm6w7"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957200    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25bad633-52dd-438c-ade8-4b59d566d336-xtables-lock\") pod \"kube-proxy-pv62j\" (UID: \"25bad633-52dd-438c-ade8-4b59d566d336\") " pod="kube-system/kube-proxy-pv62j"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957215    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/98861945-a85a-4fc2-8f87-03ab3cd624cf-tmp-volume\") pod \"kubernetes-dashboard-5fd5574d9f-7sfms\" (UID: \"98861945-a85a-4fc2-8f87-03ab3cd624cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-7sfms"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957229    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbp5n\" (UniqueName: \"kubernetes.io/projected/f0b7ea65-aadc-49a7-8498-f541759e61a9-kube-api-access-pbp5n\") pod \"storage-provisioner\" (UID: \"f0b7ea65-aadc-49a7-8498-f541759e61a9\") " pod="kube-system/storage-provisioner"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957245    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bkzm\" (UniqueName: \"kubernetes.io/projected/98861945-a85a-4fc2-8f87-03ab3cd624cf-kube-api-access-6bkzm\") pod \"kubernetes-dashboard-5fd5574d9f-7sfms\" (UID: \"98861945-a85a-4fc2-8f87-03ab3cd624cf\") " pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-7sfms"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957263    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99j6q\" (UniqueName: \"kubernetes.io/projected/6059b3a5-1140-4e97-b06a-44811e5c5844-kube-api-access-99j6q\") pod \"dashboard-metrics-scraper-dffd48c4c-bgrbr\" (UID: \"6059b3a5-1140-4e97-b06a-44811e5c5844\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-bgrbr"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957279    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/00979eea-4984-40c4-9975-4e9ef9c33a1f-config-volume\") pod \"coredns-6d4b75cb6d-vm6w7\" (UID: \"00979eea-4984-40c4-9975-4e9ef9c33a1f\") " pod="kube-system/coredns-6d4b75cb6d-vm6w7"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957294    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hbbj\" (UniqueName: \"kubernetes.io/projected/25bad633-52dd-438c-ade8-4b59d566d336-kube-api-access-8hbbj\") pod \"kube-proxy-pv62j\" (UID: \"25bad633-52dd-438c-ade8-4b59d566d336\") " pod="kube-system/kube-proxy-pv62j"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957309    9763 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25bad633-52dd-438c-ade8-4b59d566d336-kube-proxy\") pod \"kube-proxy-pv62j\" (UID: \"25bad633-52dd-438c-ade8-4b59d566d336\") " pod="kube-system/kube-proxy-pv62j"
	Jul 28 23:01:21 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:21.957317    9763 reconciler.go:157] "Reconciler: start to sync state"
	Jul 28 23:01:23 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:23.106932    9763 request.go:601] Waited for 1.049973974s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8444/api/v1/namespaces/kube-system/pods
	Jul 28 23:01:23 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:23.140676    9763 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-default-k8s-different-port-20220728155420-12923\" already exists" pod="kube-system/kube-scheduler-default-k8s-different-port-20220728155420-12923"
	Jul 28 23:01:23 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:23.322911    9763 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-default-k8s-different-port-20220728155420-12923\" already exists" pod="kube-system/kube-controller-manager-default-k8s-different-port-20220728155420-12923"
	Jul 28 23:01:23 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:23.566662    9763 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-default-k8s-different-port-20220728155420-12923\" already exists" pod="kube-system/etcd-default-k8s-different-port-20220728155420-12923"
	Jul 28 23:01:23 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:23.766053    9763 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-default-k8s-different-port-20220728155420-12923\" already exists" pod="kube-system/kube-apiserver-default-k8s-different-port-20220728155420-12923"
	Jul 28 23:01:24 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:24.490358    9763 remote_image.go:218] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 28 23:01:24 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:24.490411    9763 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Jul 28 23:01:24 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:24.490518    9763 kuberuntime_manager.go:905] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-f2lzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-5c6f97fb75-58sqm_kube-system(55c181c1-c5da-4dbb-8b61-2522d22261f4): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host
	Jul 28 23:01:24 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:24.490546    9763 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.65.2:53: no such host\"" pod="kube-system/metrics-server-5c6f97fb75-58sqm" podUID=55c181c1-c5da-4dbb-8b61-2522d22261f4
	Jul 28 23:01:24 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:24.610392    9763 scope.go:110] "RemoveContainer" containerID="02b51e5ec9ed0854e334add4f068e6e1a0777b05b3d76db4fc171310ad19ae80"
	Jul 28 23:01:25 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:25.081204    9763 scope.go:110] "RemoveContainer" containerID="02b51e5ec9ed0854e334add4f068e6e1a0777b05b3d76db4fc171310ad19ae80"
	Jul 28 23:01:25 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:25.081448    9763 scope.go:110] "RemoveContainer" containerID="ed96c2fb779ffd97feb9832c8208ab139b28264c81606c936e8faf8283c68d02"
	Jul 28 23:01:25 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:25.081642    9763 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-bgrbr_kubernetes-dashboard(6059b3a5-1140-4e97-b06a-44811e5c5844)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-bgrbr" podUID=6059b3a5-1140-4e97-b06a-44811e5c5844
	Jul 28 23:01:27 default-k8s-different-port-20220728155420-12923 kubelet[9763]: I0728 23:01:27.018152    9763 scope.go:110] "RemoveContainer" containerID="ed96c2fb779ffd97feb9832c8208ab139b28264c81606c936e8faf8283c68d02"
	Jul 28 23:01:27 default-k8s-different-port-20220728155420-12923 kubelet[9763]: E0728 23:01:27.018677    9763 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-dffd48c4c-bgrbr_kubernetes-dashboard(6059b3a5-1140-4e97-b06a-44811e5c5844)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-bgrbr" podUID=6059b3a5-1140-4e97-b06a-44811e5c5844
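
Two different failure modes are interleaved in the kubelet log above. The "Failed creating a mirror pod ... already exists" errors at 23:01:23 are routine churn from a restarted kubelet re-registering its static control-plane pods. The ErrImagePull for fake.domain/k8s.gcr.io/echoserver:1.4 is by design: the test points the metrics-server addon at an unresolvable registry, so the pull can never succeed and the pod stays non-running; it is the same metrics-server-5c6f97fb75-58sqm the post-mortem below singles out. A minimal client-go sketch (illustrative only, not part of the test suite; assumes a kubeconfig at the default location) for spotting pods stuck this way:

	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Build a client from ~/.kube/config (clientcmd.RecommendedHomeFile).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			for _, cs := range p.Status.ContainerStatuses {
				// A container that cannot pull its image sits in Waiting
				// with one of these two reasons.
				if w := cs.State.Waiting; w != nil && (w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
					fmt.Printf("%s: %s: %s\n", p.Name, w.Reason, w.Message)
				}
			}
		}
	}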
	
	* 
	* ==> kubernetes-dashboard [9e8e6f7bfaeb] <==
	* 2022/07/28 23:00:34 Using namespace: kubernetes-dashboard
	2022/07/28 23:00:34 Using in-cluster config to connect to apiserver
	2022/07/28 23:00:34 Using secret token for csrf signing
	2022/07/28 23:00:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/07/28 23:00:34 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/07/28 23:00:34 Successful initial request to the apiserver, version: v1.24.3
	2022/07/28 23:00:34 Generating JWE encryption key
	2022/07/28 23:00:34 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/07/28 23:00:34 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/07/28 23:00:34 Initializing JWE encryption key from synchronized object
	2022/07/28 23:00:34 Creating in-cluster Sidecar client
	2022/07/28 23:00:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/28 23:00:34 Serving insecurely on HTTP port: 9090
	2022/07/28 23:01:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2022/07/28 23:00:34 Starting overwatch
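
Two reading aids for the dashboard log: the 23:00:34 "Starting overwatch" line printed after the 23:01:20 entry appears to be a capture-ordering artifact (the dashboard normally emits it first), and the repeated "Metric client health check failed ... (get services dashboard-metrics-scraper)" is consistent with the scraper's CrashLoopBackOff in the kubelet log above, since a crash-looping backend likely leaves its Service with no ready endpoints. A quick check against this profile would be kubectl -n kubernetes-dashboard get endpoints dashboard-metrics-scraper.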
	
	* 
	* ==> storage-provisioner [7da6a2c79921] <==
	* I0728 23:00:26.252475       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 23:00:26.297612       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 23:00:26.297713       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0728 23:00:26.305962       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0728 23:00:26.306163       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220728155420-12923_472ec422-e9ee-424f-a0a5-dbfdfe69d361!
	I0728 23:00:26.306472       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e074a991-8d27-48cc-bdc9-ac906f837298", APIVersion:"v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-different-port-20220728155420-12923_472ec422-e9ee-424f-a0a5-dbfdfe69d361 became leader
	I0728 23:00:26.407219       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-different-port-20220728155420-12923_472ec422-e9ee-424f-a0a5-dbfdfe69d361!
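
The leaderelection.go lines show the provisioner taking the kube-system/k8s.io-minikube-hostpath lock before starting its controller, so only one provisioner instance acts at a time; the 23:00:26 event records an Endpoints-based lock. A hedged sketch of the same pattern using client-go's current Lease lock (not minikube's storage-provisioner code; clientset, id, and start are assumed inputs, and the timings are illustrative):

	package provisioner
	
	import (
		"context"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	// runWhenLeader blocks, invoking start only while this instance holds the lease.
	func runWhenLeader(ctx context.Context, clientset kubernetes.Interface, id string, start func(context.Context)) {
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     clientset.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second, // how long an acquired lease is valid
			RenewDeadline:   10 * time.Second, // leader must renew within this window
			RetryPeriod:     2 * time.Second,  // candidates retry at this interval
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: start,     // the "successfully acquired lease" moment
				OnStoppedLeading: func() {}, // lost the lease: stop provisioning
			},
		})
	}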
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220728155420-12923 -n default-k8s-different-port-20220728155420-12923
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220728155420-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: metrics-server-5c6f97fb75-58sqm
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220728155420-12923 describe pod metrics-server-5c6f97fb75-58sqm
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220728155420-12923 describe pod metrics-server-5c6f97fb75-58sqm: exit status 1 (274.605628ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-58sqm" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220728155420-12923 describe pod metrics-server-5c6f97fb75-58sqm: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (43.33s)
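
The post-mortem above also explains its own confusing tail: the helper first lists non-running pods cluster-wide with a field selector (finding metrics-server-5c6f97fb75-58sqm), then describes that pod by name, and in the gap between the two calls the Deployment replaced the pod, so describe returns NotFound (exit status 1). The listing step in client-go terms (a sketch assuming an already-built clientset; not helpers_test.go's literal code):

	// Imports assumed: context; corev1 "k8s.io/api/core/v1";
	// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"; "k8s.io/client-go/kubernetes".
	
	// nonRunningPods mirrors kubectl's --field-selector=status.phase!=Running.
	func nonRunningPods(ctx context.Context, clientset kubernetes.Interface) ([]corev1.Pod, error) {
		list, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			return nil, err
		}
		// Names taken from this snapshot can vanish before a follow-up Get/describe.
		return list.Items, nil
	}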

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:01:52.830864   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:02:13.957106   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
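
Two kinds of noise interleave through the rest of this 9m0s wait. The EOF warnings mean connections to the apiserver at 127.0.0.1:58973 are being dropped mid-request, which is expected while the freshly restarted old-k8s control plane comes back up; the helper treats them as retryable and keeps polling. The cert_rotation errors come from the shared test process (pid 12923): background certificate reloaders are still watching client.crt files of profiles that earlier tests already deleted (bridge, addons, functional, and others below), so they are unrelated to this test. The rough shape of such a poll loop (a sketch with an assumed clientset c and an illustrative interval; not the test's literal code):

	// Imports assumed: context; time; metav1 "k8s.io/apimachinery/pkg/apis/meta/v1";
	// "k8s.io/apimachinery/pkg/util/wait"; "k8s.io/client-go/kubernetes".
	
	// waitForDashboard polls until a dashboard pod exists or the timeout lapses.
	func waitForDashboard(c kubernetes.Interface) error {
		return wait.PollImmediate(3*time.Second, 9*time.Minute, func() (bool, error) {
			pods, err := c.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				return false, nil // EOFs and resets are retried, not treated as fatal
			}
			return len(pods.Items) > 0, nil
		})
	}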

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:02:37.912289   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:04:00.965914   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 16:04:07.104623   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:04:10.573974   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:05:07.822529   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
E0728 16:05:07.828993   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
E0728 16:05:07.839640   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
E0728 16:05:07.861932   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
E0728 16:05:07.903242   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
E0728 16:05:07.985484   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
E0728 16:05:08.146330   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
E0728 16:05:08.468415   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
E0728 16:05:09.108824   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
E0728 16:05:10.390136   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
E0728 16:05:12.951046   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:05:18.071336   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:05:28.313537   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:05:41.917465   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 16:05:43.544792   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 16:05:45.916098   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:05:48.793766   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:06:29.755409   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:06:52.827754   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:07:04.960952   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:07:13.953165   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:07:29.330337   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 16:07:37.986490   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:07:51.755975   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
E0728 16:07:52.034134   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:58973/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0728 16:08:04.837955   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
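
From this point the warning changes character: "client rate limiter Wait returned an error: context deadline exceeded" is not a network failure at all. client-go calls its rate limiter's Wait(ctx) before sending each request, and once the surrounding deadline has passed, Wait fails immediately without contacting 127.0.0.1:58973, so every remaining poll burns out client-side. In miniature (illustrative values; 5 QPS with burst 10 are client-go's long-standing client defaults):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"k8s.io/client-go/util/flowcontrol"
	)
	
	func main() {
		limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
		// Stand-in for the test's already-expired 9-minute deadline.
		ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond)
		defer cancel()
		time.Sleep(2 * time.Millisecond) // let the deadline pass
		if err := limiter.Wait(ctx); err != nil {
			fmt.Println("client rate limiter Wait returned an error:", err)
		}
	}
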
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0728 16:08:36.661072   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0728 16:09:07.182083   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[previous WARNING repeated 98 more times over the 9m0s wait; the distinct cert_rotation errors interleaved with it are preserved below]
E0728 16:09:10.647533   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 16:10:07.897002   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
E0728 16:10:17.092071   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 16:10:35.597573   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/default-k8s-different-port-20220728155420-12923/client.crt: no such file or directory
E0728 16:10:41.992012   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 16:10:43.621525   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 16:10:45.992448   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923: exit status 2 (416.262951ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-20220728153807-12923" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-20220728153807-12923 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220728153807-12923 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.989µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-20220728153807-12923 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220728153807-12923
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220728153807-12923:

-- stdout --
	[
	    {
	        "Id": "2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f",
	        "Created": "2022-07-28T22:38:14.165684968Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 246485,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T22:43:51.426692673Z",
	            "FinishedAt": "2022-07-28T22:43:48.536711569Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/hosts",
	        "LogPath": "/var/lib/docker/containers/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f/2056d9a86a4ca08e1ec783c3ce5a34920343f59ecadd8b3d5a33c2a05952e44f-json.log",
	        "Name": "/old-k8s-version-20220728153807-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20220728153807-12923:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220728153807-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/352be77058f615864a907445aa12422ebe9fd025617aac2141ae82132058eefb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220728153807-12923",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220728153807-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220728153807-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220728153807-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "468350ddce385e27616eb7d67f293e8984e4658354bccab9cc7f747311c10282",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58969"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58970"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58971"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58972"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "58973"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/468350ddce38",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220728153807-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2056d9a86a4c",
	                        "old-k8s-version-20220728153807-12923"
	                    ],
	                    "NetworkID": "a0b55590b406427f4aa9e75be1fbe382dd54fa7a1c14e888e401b45bb478b32d",
	                    "EndpointID": "d3216ca95fb05fd9cb589a1b6ef0ebe5edfacf75863c36ec7c40cddaa73c1dc8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923: exit status 2 (415.041844ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-20220728153807-12923 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-20220728153807-12923 logs -n 25: (3.473500432s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220728155419-12923      | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | disable-driver-mounts-20220728155419-12923                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:00 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:00 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:01 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:01 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:01 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220728160133-12923 --memory=2200           | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:02 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220728160133-12923 --memory=2200           | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:03 PDT | 28 Jul 22 16:03 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:03 PDT | 28 Jul 22 16:03 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:03 PDT | 28 Jul 22 16:03 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 16:02:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 16:02:29.603987   31212 out.go:296] Setting OutFile to fd 1 ...
	I0728 16:02:29.604169   31212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 16:02:29.604175   31212 out.go:309] Setting ErrFile to fd 2...
	I0728 16:02:29.604179   31212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 16:02:29.604278   31212 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 16:02:29.604724   31212 out.go:303] Setting JSON to false
	I0728 16:02:29.620611   31212 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10391,"bootTime":1659038958,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 16:02:29.620715   31212 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 16:02:29.642836   31212 out.go:177] * [newest-cni-20220728160133-12923] minikube v1.26.0 on Darwin 12.5
	I0728 16:02:29.685952   31212 notify.go:193] Checking for updates...
	I0728 16:02:29.707624   31212 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 16:02:29.729048   31212 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 16:02:29.751059   31212 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 16:02:29.772938   31212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 16:02:29.794784   31212 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 16:02:29.817184   31212 config.go:178] Loaded profile config "newest-cni-20220728160133-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 16:02:29.817825   31212 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 16:02:29.884824   31212 docker.go:137] docker version: linux-20.10.17
	I0728 16:02:29.884958   31212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 16:02:30.016461   31212 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 23:02:29.951122434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 16:02:30.058952   31212 out.go:177] * Using the docker driver based on existing profile
	I0728 16:02:30.081202   31212 start.go:284] selected driver: docker
	I0728 16:02:30.081227   31212 start.go:808] validating driver "docker" against &{Name:newest-cni-20220728160133-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220728160133-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 16:02:30.081399   31212 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 16:02:30.084734   31212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 16:02:30.215852   31212 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 23:02:30.151469441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 16:02:30.216047   31212 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0728 16:02:30.216068   31212 cni.go:95] Creating CNI manager for ""
	I0728 16:02:30.216080   31212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 16:02:30.216095   31212 start_flags.go:310] config:
	{Name:newest-cni-20220728160133-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220728160133-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 16:02:30.258426   31212 out.go:177] * Starting control plane node newest-cni-20220728160133-12923 in cluster newest-cni-20220728160133-12923
	I0728 16:02:30.279453   31212 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 16:02:30.300285   31212 out.go:177] * Pulling base image ...
	I0728 16:02:30.342972   31212 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 16:02:30.343034   31212 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 16:02:30.343067   31212 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0728 16:02:30.343091   31212 cache.go:57] Caching tarball of preloaded images
	I0728 16:02:30.343264   31212 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 16:02:30.343286   31212 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0728 16:02:30.344223   31212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/config.json ...
	I0728 16:02:30.408173   31212 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 16:02:30.408192   31212 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 16:02:30.408203   31212 cache.go:208] Successfully downloaded all kic artifacts
	I0728 16:02:30.408274   31212 start.go:370] acquiring machines lock for newest-cni-20220728160133-12923: {Name:mkde4349139da57471ff6865c409a88cc56837e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 16:02:30.408353   31212 start.go:374] acquired machines lock for "newest-cni-20220728160133-12923" in 60.272µs
	I0728 16:02:30.408375   31212 start.go:95] Skipping create...Using existing machine configuration
	I0728 16:02:30.408384   31212 fix.go:55] fixHost starting: 
	I0728 16:02:30.408637   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:30.472018   31212 fix.go:103] recreateIfNeeded on newest-cni-20220728160133-12923: state=Stopped err=<nil>
	W0728 16:02:30.472048   31212 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 16:02:30.515834   31212 out.go:177] * Restarting existing docker container for "newest-cni-20220728160133-12923" ...
	I0728 16:02:30.537952   31212 cli_runner.go:164] Run: docker start newest-cni-20220728160133-12923
	I0728 16:02:30.869264   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:30.933448   31212 kic.go:415] container "newest-cni-20220728160133-12923" state is running.
	I0728 16:02:30.934009   31212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220728160133-12923
	I0728 16:02:31.002987   31212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/config.json ...
	I0728 16:02:31.003389   31212 machine.go:88] provisioning docker machine ...
	I0728 16:02:31.003411   31212 ubuntu.go:169] provisioning hostname "newest-cni-20220728160133-12923"
	I0728 16:02:31.003476   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:31.070544   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:31.070746   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:31.070761   31212 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220728160133-12923 && echo "newest-cni-20220728160133-12923" | sudo tee /etc/hostname
	I0728 16:02:31.201548   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220728160133-12923
	
	I0728 16:02:31.201634   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:31.266510   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:31.266676   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:31.266694   31212 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220728160133-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220728160133-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220728160133-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 16:02:31.385812   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: 
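	The grep/sed sequence just run over SSH is an idempotent /etc/hosts update: it touches the file only when the hostname entry is missing, rewriting an existing 127.0.1.1 line in place or appending one otherwise. A minimal standalone sketch of the same pattern, with NAME standing in for the profile name:
	
		# Idempotent hostname pinning in /etc/hosts (sketch of the pattern above).
		NAME=newest-cni-20220728160133-12923
		if ! grep -q "\s${NAME}$" /etc/hosts; then
			if grep -q '^127.0.1.1\s' /etc/hosts; then
				sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NAME}/" /etc/hosts
			else
				echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
			fi
		fi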
	I0728 16:02:31.385833   31212 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 16:02:31.385867   31212 ubuntu.go:177] setting up certificates
	I0728 16:02:31.385874   31212 provision.go:83] configureAuth start
	I0728 16:02:31.385943   31212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220728160133-12923
	I0728 16:02:31.451614   31212 provision.go:138] copyHostCerts
	I0728 16:02:31.451705   31212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 16:02:31.451715   31212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 16:02:31.451804   31212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 16:02:31.452003   31212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 16:02:31.452013   31212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 16:02:31.452087   31212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 16:02:31.452228   31212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 16:02:31.452233   31212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 16:02:31.452288   31212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 16:02:31.452406   31212 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220728160133-12923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220728160133-12923]
	I0728 16:02:31.570157   31212 provision.go:172] copyRemoteCerts
	I0728 16:02:31.570217   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 16:02:31.570260   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:31.633580   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:31.718874   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 16:02:31.736262   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0728 16:02:31.753572   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 16:02:31.770074   31212 provision.go:86] duration metric: configureAuth took 384.191515ms
	I0728 16:02:31.770089   31212 ubuntu.go:193] setting minikube options for container-runtime
	I0728 16:02:31.770242   31212 config.go:178] Loaded profile config "newest-cni-20220728160133-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 16:02:31.770295   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:31.834721   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:31.834895   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:31.834906   31212 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 16:02:31.957197   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 16:02:31.957210   31212 ubuntu.go:71] root file system type: overlay
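	The df probe above asks which filesystem backs /; it reports overlay here because the "machine" is itself a Docker container rather than a VM. The same one-liner works on any Linux host:
	
		# Prints the root filesystem type, e.g. "overlay", "ext4" or "btrfs".
		df --output=fstype / | tail -n 1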
	I0728 16:02:31.957365   31212 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 16:02:31.957437   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.021693   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:32.021854   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:32.021903   31212 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 16:02:32.149282   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
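	The empty ExecStart= line immediately followed by a full ExecStart=... in the unit above is the standard systemd idiom for replacing, rather than appending to, a start command inherited from the base configuration, exactly as the unit's own comments explain. Two commands that appear later in this log are the usual way to inspect and apply such an edit:
	
		# Show the unit content systemd will actually use:
		sudo systemctl cat docker.service
		# Apply an on-disk unit change:
		sudo systemctl daemon-reload && sudo systemctl restart docker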
	I0728 16:02:32.149364   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.213109   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:32.213274   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:32.213289   31212 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 16:02:32.338244   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: 
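	The one-liner just run relies on diff's exit status: diff -u exits 0 when the files match, so the block after || (move the new unit into place, daemon-reload, enable, restart) only runs when the rendered unit actually changed, and an unchanged unit costs no Docker restart. The general shape, with CURRENT, NEW and SERVICE as placeholders:
	
		# Replace-only-if-changed: restart the service only when the unit differs.
		sudo diff -u CURRENT NEW || {
			sudo mv NEW CURRENT
			sudo systemctl daemon-reload && sudo systemctl restart SERVICE
		}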
	I0728 16:02:32.338264   31212 machine.go:91] provisioned docker machine in 1.334887211s
	I0728 16:02:32.338275   31212 start.go:307] post-start starting for "newest-cni-20220728160133-12923" (driver="docker")
	I0728 16:02:32.338281   31212 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 16:02:32.338339   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 16:02:32.338391   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.401686   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:32.486912   31212 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 16:02:32.490645   31212 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 16:02:32.490662   31212 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 16:02:32.490677   31212 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 16:02:32.490684   31212 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 16:02:32.490696   31212 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 16:02:32.490812   31212 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 16:02:32.490953   31212 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 16:02:32.491096   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 16:02:32.498304   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 16:02:32.516144   31212 start.go:310] post-start completed in 177.863755ms
	I0728 16:02:32.516223   31212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 16:02:32.516271   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.580139   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:32.665740   31212 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 16:02:32.670469   31212 fix.go:57] fixHost completed within 2.262122262s
	I0728 16:02:32.670481   31212 start.go:82] releasing machines lock for "newest-cni-20220728160133-12923", held for 2.262159415s
	I0728 16:02:32.670550   31212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220728160133-12923
	I0728 16:02:32.734665   31212 ssh_runner.go:195] Run: systemctl --version
	I0728 16:02:32.734676   31212 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 16:02:32.734725   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.734770   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.803568   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:32.803742   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:33.085349   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 16:02:33.093438   31212 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0728 16:02:33.105957   31212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 16:02:33.176841   31212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0728 16:02:33.256944   31212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 16:02:33.267572   31212 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 16:02:33.267647   31212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 16:02:33.277112   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 16:02:33.289609   31212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 16:02:33.356813   31212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 16:02:33.432618   31212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 16:02:33.497815   31212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 16:02:33.738564   31212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 16:02:33.802407   31212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 16:02:33.879275   31212 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 16:02:33.888513   31212 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 16:02:33.888576   31212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 16:02:33.892665   31212 start.go:471] Will wait 60s for crictl version
	I0728 16:02:33.892723   31212 ssh_runner.go:195] Run: sudo crictl version
	I0728 16:02:33.921786   31212 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0728 16:02:33.921853   31212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 16:02:33.960795   31212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 16:02:34.049447   31212 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 16:02:34.049621   31212 cli_runner.go:164] Run: docker exec -t newest-cni-20220728160133-12923 dig +short host.docker.internal
	I0728 16:02:34.171213   31212 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 16:02:34.171311   31212 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 16:02:34.175390   31212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 16:02:34.184506   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:34.270359   31212 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0728 16:02:34.292818   31212 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 16:02:34.292978   31212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 16:02:34.324733   31212 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 16:02:34.324750   31212 docker.go:542] Images already preloaded, skipping extraction
	I0728 16:02:34.324839   31212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 16:02:34.354027   31212 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 16:02:34.354046   31212 cache_images.go:84] Images are preloaded, skipping loading
	I0728 16:02:34.354131   31212 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
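	This probe reads the cgroup driver the Docker daemon is using; the KubeletConfiguration rendered below sets cgroupDriver to match (systemd here), since a kubelet/runtime cgroup-driver mismatch is a classic reason for pods failing to start. The probe on its own:
	
		# Prints "systemd" or "cgroupfs" depending on the daemon configuration.
		docker info --format '{{.CgroupDriver}}'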
	I0728 16:02:34.426471   31212 cni.go:95] Creating CNI manager for ""
	I0728 16:02:34.426483   31212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 16:02:34.426497   31212 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0728 16:02:34.426516   31212 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220728160133-12923 NodeName:newest-cni-20220728160133-12923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 16:02:34.426649   31212 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220728160133-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
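	The rendered kubeadm config above is four YAML documents separated by ---: InitConfiguration and ClusterConfiguration for kubeadm itself, plus the KubeletConfiguration and KubeProxyConfiguration component configs. kubeadm can print its upstream defaults for the same document kinds, which is a convenient way to see which fields minikube overrides:
	
		# Defaults for the kubeadm documents:
		kubeadm config print init-defaults
		# Defaults for the component configs:
		kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration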
	I0728 16:02:34.426743   31212 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220728160133-12923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220728160133-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0728 16:02:34.426805   31212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 16:02:34.433968   31212 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 16:02:34.434013   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 16:02:34.440954   31212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0728 16:02:34.453330   31212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 16:02:34.465174   31212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0728 16:02:34.477464   31212 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0728 16:02:34.481122   31212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 16:02:34.490283   31212 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923 for IP: 192.168.67.2
	I0728 16:02:34.490387   31212 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 16:02:34.490437   31212 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 16:02:34.490514   31212 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/client.key
	I0728 16:02:34.490573   31212 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/apiserver.key.c7fa3a9e
	I0728 16:02:34.490619   31212 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/proxy-client.key
	I0728 16:02:34.490812   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 16:02:34.490848   31212 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 16:02:34.490863   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 16:02:34.490893   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 16:02:34.490922   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 16:02:34.490949   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 16:02:34.491006   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 16:02:34.491520   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 16:02:34.507860   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0728 16:02:34.524173   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 16:02:34.540334   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 16:02:34.556938   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 16:02:34.573639   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 16:02:34.590137   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 16:02:34.607166   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 16:02:34.642640   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 16:02:34.659491   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 16:02:34.676033   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 16:02:34.693030   31212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 16:02:34.705522   31212 ssh_runner.go:195] Run: openssl version
	I0728 16:02:34.710856   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 16:02:34.718744   31212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 16:02:34.722654   31212 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 16:02:34.722702   31212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 16:02:34.728027   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 16:02:34.735333   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 16:02:34.743694   31212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 16:02:34.747931   31212 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 16:02:34.747982   31212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 16:02:34.753590   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 16:02:34.763104   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 16:02:34.770837   31212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 16:02:34.775515   31212 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 16:02:34.775562   31212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 16:02:34.780899   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
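	The three repeated blocks above implement OpenSSL's hashed-directory CA lookup: each certificate under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 here), which is how TLS clients locate it. A simplified sketch of how one link is derived, using the minikube CA as the example (the log itself links via /etc/ssl/certs/minikubeCA.pem in two steps):
	
		# Compute the subject hash and create the lookup symlink.
		CERT=/usr/share/ca-certificates/minikubeCA.pem
		HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this CA
		sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"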
	I0728 16:02:34.789548   31212 kubeadm.go:395] StartCluster: {Name:newest-cni-20220728160133-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220728160133-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 16:02:34.789678   31212 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 16:02:34.820043   31212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 16:02:34.828195   31212 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 16:02:34.828209   31212 kubeadm.go:626] restartCluster start
	I0728 16:02:34.828259   31212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 16:02:34.835176   31212 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:34.835231   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:34.899691   31212 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220728160133-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 16:02:34.899864   31212 kubeconfig.go:127] "newest-cni-20220728160133-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 16:02:34.900174   31212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:02:34.901373   31212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 16:02:34.908787   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:34.908845   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:34.917197   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.119337   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.119544   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.129792   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.319312   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.319477   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.331080   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.518938   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.519039   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.528435   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.719350   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.719616   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.730406   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.919401   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.919549   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.930377   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.119438   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.119541   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.130056   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.319337   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.319525   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.329785   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.519379   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.519516   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.529835   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.719302   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.719398   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.730179   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.917372   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.917496   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.928519   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.119347   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.119507   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.129922   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.319301   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.319531   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.330062   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.519308   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.519551   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.529927   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.717327   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.717423   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.726044   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.919322   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.919438   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.929738   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.929748   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.929795   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.937878   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.937891   31212 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0728 16:02:37.937898   31212 kubeadm.go:1092] stopping kube-system containers ...
	I0728 16:02:37.937949   31212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 16:02:37.968534   31212 docker.go:443] Stopping containers: [df67d963fd1a e9afc45d6898 530b6eb6d7c7 97305fddcc4a 48abc99a84e2 bfe426170ea1 89e2d6baa776 c9e14b5f6cc9 ba52b06cc532 5e4060df4054 0ffd2a96459e 6795e4facb30 f41fe719b176 857d35ea3c0e 09e8e2df7716 75a84df66ff9 9624f7dfb492]
	I0728 16:02:37.968612   31212 ssh_runner.go:195] Run: docker stop df67d963fd1a e9afc45d6898 530b6eb6d7c7 97305fddcc4a 48abc99a84e2 bfe426170ea1 89e2d6baa776 c9e14b5f6cc9 ba52b06cc532 5e4060df4054 0ffd2a96459e 6795e4facb30 f41fe719b176 857d35ea3c0e 09e8e2df7716 75a84df66ff9 9624f7dfb492
	I0728 16:02:37.997807   31212 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 16:02:38.008050   31212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 16:02:38.015217   31212 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 28 23:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 28 23:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul 28 23:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 28 23:01 /etc/kubernetes/scheduler.conf
	
	I0728 16:02:38.015267   31212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 16:02:38.022571   31212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 16:02:38.029680   31212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 16:02:38.036703   31212 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:38.036746   31212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 16:02:38.043830   31212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 16:02:38.050885   31212 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:38.050935   31212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0728 16:02:38.058048   31212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 16:02:38.065404   31212 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 16:02:38.065414   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:38.111280   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:38.799836   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:38.977237   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:39.023772   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:39.067660   31212 api_server.go:51] waiting for apiserver process to appear ...
	I0728 16:02:39.067715   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:02:39.597308   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:02:40.096654   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:02:40.109739   31212 api_server.go:71] duration metric: took 1.042098966s to wait for apiserver process to appear ...
	I0728 16:02:40.109760   31212 api_server.go:87] waiting for apiserver healthz status ...
	I0728 16:02:40.109772   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:40.110868   31212 api_server.go:256] stopped: https://127.0.0.1:60085/healthz: Get "https://127.0.0.1:60085/healthz": EOF
	I0728 16:02:40.612933   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:43.301630   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 16:02:43.301646   31212 api_server.go:102] status: https://127.0.0.1:60085/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 16:02:43.611041   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:43.617933   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 16:02:43.617948   31212 api_server.go:102] status: https://127.0.0.1:60085/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 16:02:44.112038   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:44.118015   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 16:02:44.118027   31212 api_server.go:102] status: https://127.0.0.1:60085/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 16:02:44.610927   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:44.635752   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 200:
	ok
	I0728 16:02:44.642468   31212 api_server.go:140] control plane version: v1.24.3
	I0728 16:02:44.642485   31212 api_server.go:130] duration metric: took 4.532795077s to wait for apiserver health ...
	I0728 16:02:44.642492   31212 cni.go:95] Creating CNI manager for ""
	I0728 16:02:44.642496   31212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 16:02:44.642514   31212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 16:02:44.649448   31212 system_pods.go:59] 8 kube-system pods found
	I0728 16:02:44.649469   31212 system_pods.go:61] "coredns-6d4b75cb6d-prc72" [d43bde09-e312-4bea-952e-daf9ee264c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0728 16:02:44.649476   31212 system_pods.go:61] "etcd-newest-cni-20220728160133-12923" [a6bb18a1-1ae7-40d9-a37c-2e5648f96e42] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0728 16:02:44.649481   31212 system_pods.go:61] "kube-apiserver-newest-cni-20220728160133-12923" [b6e1fc8f-718f-493d-b415-4216550c6c10] Running
	I0728 16:02:44.649487   31212 system_pods.go:61] "kube-controller-manager-newest-cni-20220728160133-12923" [4b232ec1-a4c1-4396-a592-cd9890a34eeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0728 16:02:44.649494   31212 system_pods.go:61] "kube-proxy-7jx99" [e05eddca-b40e-419b-b509-2a5b2523c7da] Running
	I0728 16:02:44.649498   31212 system_pods.go:61] "kube-scheduler-newest-cni-20220728160133-12923" [cce41ddb-daea-4fe7-8b7c-609cd384981a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 16:02:44.649504   31212 system_pods.go:61] "metrics-server-5c6f97fb75-h4qvh" [43e62268-70c1-4a6e-9411-0ab4fa1c30f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 16:02:44.649508   31212 system_pods.go:61] "storage-provisioner" [2dc9b609-a74c-428c-a22d-c26db32aacf1] Running
	I0728 16:02:44.649512   31212 system_pods.go:74] duration metric: took 6.9952ms to wait for pod list to return data ...
	I0728 16:02:44.649518   31212 node_conditions.go:102] verifying NodePressure condition ...
	I0728 16:02:44.652283   31212 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 16:02:44.652297   31212 node_conditions.go:123] node cpu capacity is 6
	I0728 16:02:44.652305   31212 node_conditions.go:105] duration metric: took 2.784314ms to run NodePressure ...
	I0728 16:02:44.652318   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:44.782768   31212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 16:02:44.791228   31212 ops.go:34] apiserver oom_adj: -16
	I0728 16:02:44.791238   31212 kubeadm.go:630] restartCluster took 9.963191519s
	I0728 16:02:44.791245   31212 kubeadm.go:397] StartCluster complete in 10.001871582s
	I0728 16:02:44.791258   31212 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:02:44.791330   31212 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 16:02:44.791949   31212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:02:44.795248   31212 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220728160133-12923" rescaled to 1
	I0728 16:02:44.795284   31212 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 16:02:44.795309   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 16:02:44.817333   31212 out.go:177] * Verifying Kubernetes components...
	I0728 16:02:44.795312   31212 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0728 16:02:44.795464   31212 config.go:178] Loaded profile config "newest-cni-20220728160133-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 16:02:44.873731   31212 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0728 16:02:44.875008   31212 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220728160133-12923"
	I0728 16:02:44.875021   31212 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220728160133-12923"
	I0728 16:02:44.875027   31212 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220728160133-12923"
	I0728 16:02:44.875027   31212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 16:02:44.875033   31212 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220728160133-12923"
	W0728 16:02:44.875037   31212 addons.go:162] addon storage-provisioner should already be in state true
	I0728 16:02:44.875037   31212 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220728160133-12923"
	W0728 16:02:44.875057   31212 addons.go:162] addon metrics-server should already be in state true
	I0728 16:02:44.875061   31212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220728160133-12923"
	I0728 16:02:44.875049   31212 addons.go:65] Setting dashboard=true in profile "newest-cni-20220728160133-12923"
	I0728 16:02:44.875101   31212 host.go:66] Checking if "newest-cni-20220728160133-12923" exists ...
	I0728 16:02:44.875122   31212 host.go:66] Checking if "newest-cni-20220728160133-12923" exists ...
	I0728 16:02:44.875135   31212 addons.go:153] Setting addon dashboard=true in "newest-cni-20220728160133-12923"
	W0728 16:02:44.875145   31212 addons.go:162] addon dashboard should already be in state true
	I0728 16:02:44.875186   31212 host.go:66] Checking if "newest-cni-20220728160133-12923" exists ...
	I0728 16:02:44.875380   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:44.875540   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:44.876287   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:44.877471   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:44.891602   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:44.978222   31212 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220728160133-12923"
	W0728 16:02:45.038151   31212 addons.go:162] addon default-storageclass should already be in state true
	I0728 16:02:44.995981   31212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 16:02:45.017281   31212 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0728 16:02:45.038109   31212 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0728 16:02:45.038184   31212 host.go:66] Checking if "newest-cni-20220728160133-12923" exists ...
	I0728 16:02:45.059127   31212 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 16:02:45.117439   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 16:02:45.059503   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:45.080408   31212 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0728 16:02:45.114145   31212 api_server.go:51] waiting for apiserver process to appear ...
	I0728 16:02:45.117604   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:45.155111   31212 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0728 16:02:45.155127   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0728 16:02:45.155280   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:45.155287   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:02:45.176338   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0728 16:02:45.176362   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0728 16:02:45.176483   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:45.190146   31212 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 16:02:45.190172   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 16:02:45.190332   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:45.193858   31212 api_server.go:71] duration metric: took 398.53275ms to wait for apiserver process to appear ...
	I0728 16:02:45.193933   31212 api_server.go:87] waiting for apiserver healthz status ...
	I0728 16:02:45.193959   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:45.203508   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 200:
	ok
	I0728 16:02:45.205814   31212 api_server.go:140] control plane version: v1.24.3
	I0728 16:02:45.205868   31212 api_server.go:130] duration metric: took 11.908253ms to wait for apiserver health ...
	I0728 16:02:45.205895   31212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 16:02:45.215719   31212 system_pods.go:59] 8 kube-system pods found
	I0728 16:02:45.215742   31212 system_pods.go:61] "coredns-6d4b75cb6d-prc72" [d43bde09-e312-4bea-952e-daf9ee264c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0728 16:02:45.215770   31212 system_pods.go:61] "etcd-newest-cni-20220728160133-12923" [a6bb18a1-1ae7-40d9-a37c-2e5648f96e42] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0728 16:02:45.215782   31212 system_pods.go:61] "kube-apiserver-newest-cni-20220728160133-12923" [b6e1fc8f-718f-493d-b415-4216550c6c10] Running
	I0728 16:02:45.215805   31212 system_pods.go:61] "kube-controller-manager-newest-cni-20220728160133-12923" [4b232ec1-a4c1-4396-a592-cd9890a34eeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0728 16:02:45.215826   31212 system_pods.go:61] "kube-proxy-7jx99" [e05eddca-b40e-419b-b509-2a5b2523c7da] Running
	I0728 16:02:45.215846   31212 system_pods.go:61] "kube-scheduler-newest-cni-20220728160133-12923" [cce41ddb-daea-4fe7-8b7c-609cd384981a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 16:02:45.215870   31212 system_pods.go:61] "metrics-server-5c6f97fb75-h4qvh" [43e62268-70c1-4a6e-9411-0ab4fa1c30f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 16:02:45.215878   31212 system_pods.go:61] "storage-provisioner" [2dc9b609-a74c-428c-a22d-c26db32aacf1] Running
	I0728 16:02:45.215884   31212 system_pods.go:74] duration metric: took 9.974897ms to wait for pod list to return data ...
	I0728 16:02:45.215894   31212 default_sa.go:34] waiting for default service account to be created ...
	I0728 16:02:45.220763   31212 default_sa.go:45] found service account: "default"
	I0728 16:02:45.220781   31212 default_sa.go:55] duration metric: took 4.880297ms for default service account to be created ...
	I0728 16:02:45.220800   31212 kubeadm.go:572] duration metric: took 425.506612ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0728 16:02:45.220825   31212 node_conditions.go:102] verifying NodePressure condition ...
	I0728 16:02:45.225563   31212 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 16:02:45.225578   31212 node_conditions.go:123] node cpu capacity is 6
	I0728 16:02:45.225586   31212 node_conditions.go:105] duration metric: took 4.7554ms to run NodePressure ...
	I0728 16:02:45.225596   31212 start.go:216] waiting for startup goroutines ...
	I0728 16:02:45.248531   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:45.278823   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:45.279030   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:45.289678   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:45.403448   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0728 16:02:45.403464   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0728 16:02:45.406521   31212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 16:02:45.408574   31212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0728 16:02:45.408585   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0728 16:02:45.416041   31212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 16:02:45.497595   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0728 16:02:45.497609   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0728 16:02:45.498283   31212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0728 16:02:45.498299   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0728 16:02:45.595570   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0728 16:02:45.595587   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0728 16:02:45.600458   31212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 16:02:45.600472   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0728 16:02:45.681725   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0728 16:02:45.681760   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0728 16:02:45.706741   31212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 16:02:45.782502   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0728 16:02:45.782522   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0728 16:02:45.808701   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0728 16:02:45.808714   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0728 16:02:45.895440   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0728 16:02:45.895454   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0728 16:02:45.920964   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0728 16:02:45.920977   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0728 16:02:46.000259   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 16:02:46.000277   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0728 16:02:46.023418   31212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 16:02:46.505026   31212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.098500344s)
	I0728 16:02:46.519148   31212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.103104144s)
	I0728 16:02:46.594877   31212 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220728160133-12923"
	I0728 16:02:46.690101   31212 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0728 16:02:46.748423   31212 addons.go:414] enableAddons completed in 1.953149328s
	I0728 16:02:46.778888   31212 start.go:506] kubectl: 1.24.2, cluster: 1.24.3 (minor skew: 0)
	I0728 16:02:46.800597   31212 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220728160133-12923" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 22:43:51 UTC, end at Thu 2022-07-28 23:10:48 UTC. --
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[130]: time="2022-07-28T22:43:53.942690810Z" level=info msg="Processing signal 'terminated'"
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[130]: time="2022-07-28T22:43:53.943578596Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[130]: time="2022-07-28T22:43:53.944089211Z" level=info msg="Daemon shutdown complete"
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[130]: time="2022-07-28T22:43:53.944161741Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 systemd[1]: docker.service: Succeeded.
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 systemd[1]: Stopped Docker Application Container Engine.
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 systemd[1]: Starting Docker Application Container Engine...
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.996727993Z" level=info msg="Starting up"
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.998837785Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.998874628Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.998894523Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 28 22:43:53 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.998901889Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.999936587Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.999985502Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:53.999998161Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.000004378Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.003470166Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.008187672Z" level=info msg="Loading containers: start."
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.081875363Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.110811702Z" level=info msg="Loading containers: done."
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.118880813Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.118939961Z" level=info msg="Daemon has completed initialization"
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 systemd[1]: Started Docker Application Container Engine.
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.140764725Z" level=info msg="API listen on [::]:2376"
	Jul 28 22:43:54 old-k8s-version-20220728153807-12923 dockerd[425]: time="2022-07-28T22:43:54.143233308Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2022-07-28T23:10:50Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:10:50 up  1:31,  0 users,  load average: 0.36, 0.44, 0.73
	Linux old-k8s-version-20220728153807-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 22:43:51 UTC, end at Thu 2022-07-28 23:10:50 UTC. --
	Jul 28 23:10:48 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 28 23:10:49 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Jul 28 23:10:49 old-k8s-version-20220728153807-12923 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 28 23:10:49 old-k8s-version-20220728153807-12923 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 28 23:10:49 old-k8s-version-20220728153807-12923 kubelet[34190]: I0728 23:10:49.486739   34190 server.go:410] Version: v1.16.0
	Jul 28 23:10:49 old-k8s-version-20220728153807-12923 kubelet[34190]: I0728 23:10:49.487005   34190 plugins.go:100] No cloud provider specified.
	Jul 28 23:10:49 old-k8s-version-20220728153807-12923 kubelet[34190]: I0728 23:10:49.487015   34190 server.go:773] Client rotation is on, will bootstrap in background
	Jul 28 23:10:49 old-k8s-version-20220728153807-12923 kubelet[34190]: I0728 23:10:49.489350   34190 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 28 23:10:49 old-k8s-version-20220728153807-12923 kubelet[34190]: W0728 23:10:49.490227   34190 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 28 23:10:49 old-k8s-version-20220728153807-12923 kubelet[34190]: W0728 23:10:49.490289   34190 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 28 23:10:49 old-k8s-version-20220728153807-12923 kubelet[34190]: F0728 23:10:49.490372   34190 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 28 23:10:49 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 28 23:10:49 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 28 23:10:50 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1669.
	Jul 28 23:10:50 old-k8s-version-20220728153807-12923 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 28 23:10:50 old-k8s-version-20220728153807-12923 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 28 23:10:50 old-k8s-version-20220728153807-12923 kubelet[34203]: I0728 23:10:50.240733   34203 server.go:410] Version: v1.16.0
	Jul 28 23:10:50 old-k8s-version-20220728153807-12923 kubelet[34203]: I0728 23:10:50.241071   34203 plugins.go:100] No cloud provider specified.
	Jul 28 23:10:50 old-k8s-version-20220728153807-12923 kubelet[34203]: I0728 23:10:50.241105   34203 server.go:773] Client rotation is on, will bootstrap in background
	Jul 28 23:10:50 old-k8s-version-20220728153807-12923 kubelet[34203]: I0728 23:10:50.243067   34203 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 28 23:10:50 old-k8s-version-20220728153807-12923 kubelet[34203]: W0728 23:10:50.244278   34203 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 28 23:10:50 old-k8s-version-20220728153807-12923 kubelet[34203]: W0728 23:10:50.244318   34203 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 28 23:10:50 old-k8s-version-20220728153807-12923 kubelet[34203]: F0728 23:10:50.244345   34203 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 28 23:10:50 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 28 23:10:50 old-k8s-version-20220728153807-12923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0728 16:10:50.411219   32015 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923: exit status 2 (415.144324ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-20220728153807-12923" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.83s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (48.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220728160133-12923 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220728160133-12923 -n newest-cni-20220728160133-12923
E0728 16:02:51.959409   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220728160133-12923 -n newest-cni-20220728160133-12923: exit status 2 (16.078821617s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220728160133-12923 -n newest-cni-20220728160133-12923
E0728 16:03:04.762094   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220728160133-12923 -n newest-cni-20220728160133-12923: exit status 2 (16.078871149s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220728160133-12923 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-darwin-amd64 unpause -p newest-cni-20220728160133-12923 --alsologtostderr -v=1: (1.032510855s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220728160133-12923 -n newest-cni-20220728160133-12923
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220728160133-12923 -n newest-cni-20220728160133-12923
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220728160133-12923
helpers_test.go:235: (dbg) docker inspect newest-cni-20220728160133-12923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dcabc2c4ee37df52f3c5b5e3f8d63bfbb1ae84be73e5a5a6355b3d22891b8d69",
	        "Created": "2022-07-28T23:01:40.749675454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 314275,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T23:02:30.872182084Z",
	            "FinishedAt": "2022-07-28T23:02:28.845184509Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/dcabc2c4ee37df52f3c5b5e3f8d63bfbb1ae84be73e5a5a6355b3d22891b8d69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dcabc2c4ee37df52f3c5b5e3f8d63bfbb1ae84be73e5a5a6355b3d22891b8d69/hostname",
	        "HostsPath": "/var/lib/docker/containers/dcabc2c4ee37df52f3c5b5e3f8d63bfbb1ae84be73e5a5a6355b3d22891b8d69/hosts",
	        "LogPath": "/var/lib/docker/containers/dcabc2c4ee37df52f3c5b5e3f8d63bfbb1ae84be73e5a5a6355b3d22891b8d69/dcabc2c4ee37df52f3c5b5e3f8d63bfbb1ae84be73e5a5a6355b3d22891b8d69-json.log",
	        "Name": "/newest-cni-20220728160133-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-20220728160133-12923:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220728160133-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d2a0d5f4912188452a307c08e6218ac8dcaf03602008b94e8b8286defe40f9ff-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2a0d5f4912188452a307c08e6218ac8dcaf03602008b94e8b8286defe40f9ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2a0d5f4912188452a307c08e6218ac8dcaf03602008b94e8b8286defe40f9ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2a0d5f4912188452a307c08e6218ac8dcaf03602008b94e8b8286defe40f9ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220728160133-12923",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220728160133-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220728160133-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220728160133-12923",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220728160133-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "751e7a20ba9581c1ddb1c3632ec1be0600dd665b3d47531e2685839174999367",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60086"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60082"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60083"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60084"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60085"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/751e7a20ba95",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220728160133-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dcabc2c4ee37",
	                        "newest-cni-20220728160133-12923"
	                    ],
	                    "NetworkID": "19d83e1f6d4f1918b5783e54f41e13d36ffa4cd02bc94de9d9a4d43b1c6abd02",
	                    "EndpointID": "b628b301c35e794d38d387dd97dcb878d4893624570dcec09e169129fd232420",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
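Most of the inspect payload above is noise for this failure; the useful fields are .State.Status, .State.Paused, and the published 22/tcp host port the harness dials for SSH. A docker --format template (the same index expression minikube itself runs elsewhere in this log) pulls just those:

    docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-20220728160133-12923

Note that the kic node container itself stays "running" with "Paused": false even though the pause test failed; minikube pause freezes the Kubernetes components inside the container rather than the outer Docker container, so docker inspect alone cannot confirm a successful pause.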
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220728160133-12923 -n newest-cni-20220728160133-12923
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220728160133-12923 logs -n 25
=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220728160133-12923 logs -n 25: (3.802705634s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220728155419-12923      | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | disable-driver-mounts-20220728155419-12923                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:00 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:00 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:01 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:01 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:01 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220728160133-12923 --memory=2200           | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:02 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220728160133-12923 --memory=2200           | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:03 PDT | 28 Jul 22 16:03 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 16:02:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 16:02:29.603987   31212 out.go:296] Setting OutFile to fd 1 ...
	I0728 16:02:29.604169   31212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 16:02:29.604175   31212 out.go:309] Setting ErrFile to fd 2...
	I0728 16:02:29.604179   31212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 16:02:29.604278   31212 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 16:02:29.604724   31212 out.go:303] Setting JSON to false
	I0728 16:02:29.620611   31212 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10391,"bootTime":1659038958,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 16:02:29.620715   31212 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 16:02:29.642836   31212 out.go:177] * [newest-cni-20220728160133-12923] minikube v1.26.0 on Darwin 12.5
	I0728 16:02:29.685952   31212 notify.go:193] Checking for updates...
	I0728 16:02:29.707624   31212 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 16:02:29.729048   31212 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 16:02:29.751059   31212 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 16:02:29.772938   31212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 16:02:29.794784   31212 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 16:02:29.817184   31212 config.go:178] Loaded profile config "newest-cni-20220728160133-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 16:02:29.817825   31212 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 16:02:29.884824   31212 docker.go:137] docker version: linux-20.10.17
	I0728 16:02:29.884958   31212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 16:02:30.016461   31212 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 23:02:29.951122434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 16:02:30.058952   31212 out.go:177] * Using the docker driver based on existing profile
	I0728 16:02:30.081202   31212 start.go:284] selected driver: docker
	I0728 16:02:30.081227   31212 start.go:808] validating driver "docker" against &{Name:newest-cni-20220728160133-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220728160133-12923 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:tru
e extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 16:02:30.081399   31212 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 16:02:30.084734   31212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 16:02:30.215852   31212 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 23:02:30.151469441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 16:02:30.216047   31212 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0728 16:02:30.216068   31212 cni.go:95] Creating CNI manager for ""
	I0728 16:02:30.216080   31212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 16:02:30.216095   31212 start_flags.go:310] config:
	{Name:newest-cni-20220728160133-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220728160133-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clu
ster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 16:02:30.258426   31212 out.go:177] * Starting control plane node newest-cni-20220728160133-12923 in cluster newest-cni-20220728160133-12923
	I0728 16:02:30.279453   31212 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 16:02:30.300285   31212 out.go:177] * Pulling base image ...
	I0728 16:02:30.342972   31212 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 16:02:30.343034   31212 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 16:02:30.343067   31212 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0728 16:02:30.343091   31212 cache.go:57] Caching tarball of preloaded images
	I0728 16:02:30.343264   31212 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 16:02:30.343286   31212 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0728 16:02:30.344223   31212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/config.json ...
	I0728 16:02:30.408173   31212 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 16:02:30.408192   31212 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 16:02:30.408203   31212 cache.go:208] Successfully downloaded all kic artifacts
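	# Both cache checks above can be replayed by hand; the tarball path and the pinned
	# kicbase image ref are taken verbatim from the preceding lines, and MINIKUBE_HOME
	# is the value printed near the top of this start log:
	ls -lh "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4"
	docker images --digests gcr.io/k8s-minikube/kicbase-builds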
	I0728 16:02:30.408274   31212 start.go:370] acquiring machines lock for newest-cni-20220728160133-12923: {Name:mkde4349139da57471ff6865c409a88cc56837e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 16:02:30.408353   31212 start.go:374] acquired machines lock for "newest-cni-20220728160133-12923" in 60.272µs
	I0728 16:02:30.408375   31212 start.go:95] Skipping create...Using existing machine configuration
	I0728 16:02:30.408384   31212 fix.go:55] fixHost starting: 
	I0728 16:02:30.408637   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:30.472018   31212 fix.go:103] recreateIfNeeded on newest-cni-20220728160133-12923: state=Stopped err=<nil>
	W0728 16:02:30.472048   31212 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 16:02:30.515834   31212 out.go:177] * Restarting existing docker container for "newest-cni-20220728160133-12923" ...
	I0728 16:02:30.537952   31212 cli_runner.go:164] Run: docker start newest-cni-20220728160133-12923
	I0728 16:02:30.869264   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:30.933448   31212 kic.go:415] container "newest-cni-20220728160133-12923" state is running.
	I0728 16:02:30.934009   31212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220728160133-12923
	I0728 16:02:31.002987   31212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/config.json ...
	I0728 16:02:31.003389   31212 machine.go:88] provisioning docker machine ...
	I0728 16:02:31.003411   31212 ubuntu.go:169] provisioning hostname "newest-cni-20220728160133-12923"
	I0728 16:02:31.003476   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:31.070544   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:31.070746   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:31.070761   31212 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220728160133-12923 && echo "newest-cni-20220728160133-12923" | sudo tee /etc/hostname
	I0728 16:02:31.201548   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220728160133-12923
	
	I0728 16:02:31.201634   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:31.266510   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:31.266676   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:31.266694   31212 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220728160133-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220728160133-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220728160133-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 16:02:31.385812   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 16:02:31.385833   31212 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/doc
ker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 16:02:31.385867   31212 ubuntu.go:177] setting up certificates
	I0728 16:02:31.385874   31212 provision.go:83] configureAuth start
	I0728 16:02:31.385943   31212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220728160133-12923
	I0728 16:02:31.451614   31212 provision.go:138] copyHostCerts
	I0728 16:02:31.451705   31212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 16:02:31.451715   31212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 16:02:31.451804   31212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 16:02:31.452003   31212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 16:02:31.452013   31212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 16:02:31.452087   31212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 16:02:31.452228   31212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 16:02:31.452233   31212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 16:02:31.452288   31212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 16:02:31.452406   31212 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220728160133-12923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220728160133-12923]
	I0728 16:02:31.570157   31212 provision.go:172] copyRemoteCerts
	I0728 16:02:31.570217   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 16:02:31.570260   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:31.633580   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:31.718874   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 16:02:31.736262   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0728 16:02:31.753572   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 16:02:31.770074   31212 provision.go:86] duration metric: configureAuth took 384.191515ms
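The configureAuth step above copies the host CA/client certs into .minikube and signs a fresh Docker server certificate with the SANs listed at provision.go:112. A minimal Go sketch of that signing step, assuming an RSA, PKCS#1-encoded CA key pair like the ca.pem/ca-key.pem files the log copies (function name, subject, and validity period are illustrative, not minikube's actual implementation):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"errors"
    	"math/big"
    	"net"
    	"time"
    )

    // signServerCert signs a Docker server certificate with an existing CA,
    // adding the SANs from the san=[...] list in the log. Sketch only; the
    // generated server key would also need to be written out as server-key.pem.
    func signServerCert(caCertPEM, caKeyPEM []byte) ([]byte, error) {
    	caBlock, _ := pem.Decode(caCertPEM)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	if caBlock == nil || keyBlock == nil {
    		return nil, errors.New("bad CA PEM input")
    	}
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	if err != nil {
    		return nil, err
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
    	if err != nil {
    		return nil, err
    	}
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-20220728160133-12923"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0), // validity is illustrative
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the provision.go:112 line above:
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-20220728160133-12923"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	if err != nil {
    		return nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }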
	I0728 16:02:31.770089   31212 ubuntu.go:193] setting minikube options for container-runtime
	I0728 16:02:31.770242   31212 config.go:178] Loaded profile config "newest-cni-20220728160133-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 16:02:31.770295   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:31.834721   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:31.834895   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:31.834906   31212 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 16:02:31.957197   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 16:02:31.957210   31212 ubuntu.go:71] root file system type: overlay
	I0728 16:02:31.957365   31212 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 16:02:31.957437   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.021693   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:32.021854   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:32.021903   31212 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 16:02:32.149282   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 16:02:32.149364   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.213109   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:32.213274   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:32.213289   31212 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 16:02:32.338244   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 16:02:32.338264   31212 machine.go:91] provisioned docker machine in 1.334887211s
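The unit update at 16:02:32.213 is deliberately idempotent: the rendered docker.service.new is diffed against the live unit, and only on a difference is it moved into place and the daemon reloaded, enabled, and restarted. A rough Go equivalent of that compare-then-swap pattern (the real flow runs each step with sudo over the SSH session shown above; paths are the ones from the log):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // updateDockerUnit mirrors the diff-then-replace in the log: skip the
    // disruptive restart entirely when the rendered unit is unchanged.
    func updateDockerUnit(rendered []byte) error {
    	const path = "/lib/systemd/system/docker.service"
    	if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, rendered) {
    		return nil // no diff: leave the running daemon alone
    	}
    	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(path+".new", path); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v failed: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }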
	I0728 16:02:32.338275   31212 start.go:307] post-start starting for "newest-cni-20220728160133-12923" (driver="docker")
	I0728 16:02:32.338281   31212 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 16:02:32.338339   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 16:02:32.338391   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.401686   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:32.486912   31212 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 16:02:32.490645   31212 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 16:02:32.490662   31212 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 16:02:32.490677   31212 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 16:02:32.490684   31212 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 16:02:32.490696   31212 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 16:02:32.490812   31212 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 16:02:32.490953   31212 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 16:02:32.491096   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 16:02:32.498304   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 16:02:32.516144   31212 start.go:310] post-start completed in 177.863755ms
	I0728 16:02:32.516223   31212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 16:02:32.516271   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.580139   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:32.665740   31212 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 16:02:32.670469   31212 fix.go:57] fixHost completed within 2.262122262s
	I0728 16:02:32.670481   31212 start.go:82] releasing machines lock for "newest-cni-20220728160133-12923", held for 2.262159415s
	I0728 16:02:32.670550   31212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220728160133-12923
	I0728 16:02:32.734665   31212 ssh_runner.go:195] Run: systemctl --version
	I0728 16:02:32.734676   31212 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 16:02:32.734725   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.734770   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.803568   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:32.803742   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:33.085349   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 16:02:33.093438   31212 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0728 16:02:33.105957   31212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 16:02:33.176841   31212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0728 16:02:33.256944   31212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 16:02:33.267572   31212 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 16:02:33.267647   31212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 16:02:33.277112   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 16:02:33.289609   31212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 16:02:33.356813   31212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 16:02:33.432618   31212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 16:02:33.497815   31212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 16:02:33.738564   31212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 16:02:33.802407   31212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 16:02:33.879275   31212 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 16:02:33.888513   31212 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 16:02:33.888576   31212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 16:02:33.892665   31212 start.go:471] Will wait 60s for crictl version
	I0728 16:02:33.892723   31212 ssh_runner.go:195] Run: sudo crictl version
	I0728 16:02:33.921786   31212 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
	I0728 16:02:33.921853   31212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 16:02:33.960795   31212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 16:02:34.049447   31212 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 16:02:34.049621   31212 cli_runner.go:164] Run: docker exec -t newest-cni-20220728160133-12923 dig +short host.docker.internal
	I0728 16:02:34.171213   31212 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 16:02:34.171311   31212 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 16:02:34.175390   31212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
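The one-liner above rewrites /etc/hosts by filtering out any stale host.minikube.internal record and appending the fresh mapping, staging through /tmp/h.$$ so the privileged step is a single sudo cp. A Go sketch of the same filter-and-append rewrite (a faithful port would likewise stage through a temp file and elevate only for the final copy):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // setHostRecord drops any line ending in "\t<name>" and appends the
    // current mapping, matching the grep -v / echo pipeline in the log.
    func setHostRecord(ip, name string) error {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

For the record above, the call would be setHostRecord("192.168.65.2", "host.minikube.internal"); the same pattern repeats later for control-plane.minikube.internal.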
	I0728 16:02:34.184506   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:34.270359   31212 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0728 16:02:34.292818   31212 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 16:02:34.292978   31212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 16:02:34.324733   31212 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 16:02:34.324750   31212 docker.go:542] Images already preloaded, skipping extraction
	I0728 16:02:34.324839   31212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 16:02:34.354027   31212 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 16:02:34.354046   31212 cache_images.go:84] Images are preloaded, skipping loading
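The two `docker images --format {{.Repository}}:{{.Tag}}` runs above are the preload check: the listed tags are compared against the expected image set for v1.24.3, and tarball extraction is skipped when nothing is missing. A sketch of that comparison (expected would be the list shown in the -- stdout -- blocks):

    package main

    import (
    	"os/exec"
    	"strings"
    )

    // missingImages lists local images the same way the log does and reports
    // which of the expected preload images are absent; an empty result means
    // extraction and image loading can be skipped.
    func missingImages(expected []string) ([]string, error) {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	have := make(map[string]bool)
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	var missing []string
    	for _, img := range expected {
    		if !have[img] {
    			missing = append(missing, img)
    		}
    	}
    	return missing, nil
    }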
	I0728 16:02:34.354131   31212 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 16:02:34.426471   31212 cni.go:95] Creating CNI manager for ""
	I0728 16:02:34.426483   31212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 16:02:34.426497   31212 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0728 16:02:34.426516   31212 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220728160133-12923 NodeName:newest-cni-20220728160133-12923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 16:02:34.426649   31212 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220728160133-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0728 16:02:34.426743   31212 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220728160133-12923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220728160133-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0728 16:02:34.426805   31212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 16:02:34.433968   31212 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 16:02:34.434013   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 16:02:34.440954   31212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0728 16:02:34.453330   31212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 16:02:34.465174   31212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0728 16:02:34.477464   31212 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0728 16:02:34.481122   31212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 16:02:34.490283   31212 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923 for IP: 192.168.67.2
	I0728 16:02:34.490387   31212 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 16:02:34.490437   31212 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 16:02:34.490514   31212 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/client.key
	I0728 16:02:34.490573   31212 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/apiserver.key.c7fa3a9e
	I0728 16:02:34.490619   31212 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/proxy-client.key
	I0728 16:02:34.490812   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 16:02:34.490848   31212 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 16:02:34.490863   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 16:02:34.490893   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 16:02:34.490922   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 16:02:34.490949   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 16:02:34.491006   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 16:02:34.491520   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 16:02:34.507860   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0728 16:02:34.524173   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 16:02:34.540334   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 16:02:34.556938   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 16:02:34.573639   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 16:02:34.590137   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 16:02:34.607166   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 16:02:34.642640   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 16:02:34.659491   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 16:02:34.676033   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 16:02:34.693030   31212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 16:02:34.705522   31212 ssh_runner.go:195] Run: openssl version
	I0728 16:02:34.710856   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 16:02:34.718744   31212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 16:02:34.722654   31212 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 16:02:34.722702   31212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 16:02:34.728027   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 16:02:34.735333   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 16:02:34.743694   31212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 16:02:34.747931   31212 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 16:02:34.747982   31212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 16:02:34.753590   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 16:02:34.763104   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 16:02:34.770837   31212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 16:02:34.775515   31212 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 16:02:34.775562   31212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 16:02:34.780899   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
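Each CA installed under /usr/share/ca-certificates is then linked as /etc/ssl/certs/<subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is the hashed layout OpenSSL's trust lookup expects. A sketch that drives the same `openssl x509 -hash -noout` invocation and creates the link (helper name is illustrative):

    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert asks openssl for the certificate's subject hash and creates
    // the /etc/ssl/certs/<hash>.0 symlink, the same ln -fs the log runs.
    func linkCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // -f semantics: replace a stale link if present
    	return os.Symlink(pemPath, link)
    }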
	I0728 16:02:34.789548   31212 kubeadm.go:395] StartCluster: {Name:newest-cni-20220728160133-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220728160133-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 16:02:34.789678   31212 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 16:02:34.820043   31212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 16:02:34.828195   31212 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 16:02:34.828209   31212 kubeadm.go:626] restartCluster start
	I0728 16:02:34.828259   31212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 16:02:34.835176   31212 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:34.835231   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:34.899691   31212 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220728160133-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 16:02:34.899864   31212 kubeconfig.go:127] "newest-cni-20220728160133-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 16:02:34.900174   31212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:02:34.901373   31212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 16:02:34.908787   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:34.908845   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:34.917197   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.119337   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.119544   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.129792   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.319312   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.319477   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.331080   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.518938   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.519039   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.528435   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.719350   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.719616   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.730406   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.919401   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.919549   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.930377   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.119438   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.119541   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.130056   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.319337   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.319525   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.329785   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.519379   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.519516   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.529835   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.719302   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.719398   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.730179   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.917372   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.917496   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.928519   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.119347   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.119507   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.129922   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.319301   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.319531   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.330062   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.519308   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.519551   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.529927   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.717327   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.717423   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.726044   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.919322   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.919438   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.929738   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.929748   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.929795   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.937878   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.937891   31212 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
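The block of repeated "Checking apiserver status" entries above is a fixed-interval poll of `sudo pgrep -xnf kube-apiserver.*minikube.*`; when the window expires without a hit, the cluster is marked as needing reconfiguration. A minimal sketch of that loop (interval and timeout are parameters here, not necessarily the exact values minikube uses):

    package main

    import (
    	"errors"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerPID retries the pgrep from the log on a fixed interval
    // until it succeeds or the deadline passes; a timeout is what triggers
    // the "needs reconfigure" branch above.
    func waitForAPIServerPID(interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil // apiserver process is up
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out waiting for the condition")
    }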
	I0728 16:02:37.937898   31212 kubeadm.go:1092] stopping kube-system containers ...
	I0728 16:02:37.937949   31212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 16:02:37.968534   31212 docker.go:443] Stopping containers: [df67d963fd1a e9afc45d6898 530b6eb6d7c7 97305fddcc4a 48abc99a84e2 bfe426170ea1 89e2d6baa776 c9e14b5f6cc9 ba52b06cc532 5e4060df4054 0ffd2a96459e 6795e4facb30 f41fe719b176 857d35ea3c0e 09e8e2df7716 75a84df66ff9 9624f7dfb492]
	I0728 16:02:37.968612   31212 ssh_runner.go:195] Run: docker stop df67d963fd1a e9afc45d6898 530b6eb6d7c7 97305fddcc4a 48abc99a84e2 bfe426170ea1 89e2d6baa776 c9e14b5f6cc9 ba52b06cc532 5e4060df4054 0ffd2a96459e 6795e4facb30 f41fe719b176 857d35ea3c0e 09e8e2df7716 75a84df66ff9 9624f7dfb492
	I0728 16:02:37.997807   31212 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 16:02:38.008050   31212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 16:02:38.015217   31212 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 28 23:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 28 23:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul 28 23:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 28 23:01 /etc/kubernetes/scheduler.conf
	
	I0728 16:02:38.015267   31212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 16:02:38.022571   31212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 16:02:38.029680   31212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 16:02:38.036703   31212 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:38.036746   31212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 16:02:38.043830   31212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 16:02:38.050885   31212 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:38.050935   31212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0728 16:02:38.058048   31212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 16:02:38.065404   31212 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 16:02:38.065414   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:38.111280   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:38.799836   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:38.977237   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:39.023772   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
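Rather than a full `kubeadm init`, the restart path replays individual init phases against the regenerated /var/tmp/minikube/kubeadm.yaml, in exactly the order logged above. A sketch of that sequence (error handling condensed; the real runner also prefixes PATH with /var/lib/minikube/binaries/v1.24.3 as the logged commands show):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runInitPhases replays the kubeadm init phases from the log, each
    // against the regenerated config file.
    func runInitPhases(configPath string) error {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, ph := range phases {
    		args := append([]string{"init", "phase"}, ph...)
    		args = append(args, "--config", configPath)
    		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("phase %v: %v: %s", ph, err, out)
    		}
    	}
    	return nil
    }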
	I0728 16:02:39.067660   31212 api_server.go:51] waiting for apiserver process to appear ...
	I0728 16:02:39.067715   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:02:39.597308   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:02:40.096654   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:02:40.109739   31212 api_server.go:71] duration metric: took 1.042098966s to wait for apiserver process to appear ...
	I0728 16:02:40.109760   31212 api_server.go:87] waiting for apiserver healthz status ...
	I0728 16:02:40.109772   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:40.110868   31212 api_server.go:256] stopped: https://127.0.0.1:60085/healthz: Get "https://127.0.0.1:60085/healthz": EOF
	I0728 16:02:40.612933   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:43.301630   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 16:02:43.301646   31212 api_server.go:102] status: https://127.0.0.1:60085/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 16:02:43.611041   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:43.617933   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 16:02:43.617948   31212 api_server.go:102] status: https://127.0.0.1:60085/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 16:02:44.112038   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:44.118015   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 16:02:44.118027   31212 api_server.go:102] status: https://127.0.0.1:60085/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 16:02:44.610927   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:44.635752   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 200:
	ok
	I0728 16:02:44.642468   31212 api_server.go:140] control plane version: v1.24.3
	I0728 16:02:44.642485   31212 api_server.go:130] duration metric: took 4.532795077s to wait for apiserver health ...
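The healthz wait above treats 403 (anonymous access before the RBAC bootstrap roles land) and 500 (post-start hooks still failing) as retryable, and stops only on a 200 whose body is "ok". A sketch of a single probe, assuming TLS verification is skipped because the request goes to the forwarded localhost port (60085 here):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // apiServerHealthy performs one probe: only a 200 with body "ok" counts
    // as healthy; 403/500 responses (and connection EOFs) come back as
    // errors so the caller keeps polling.
    func apiServerHealthy(url string) (bool, error) {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    		return true, nil
    	}
    	return false, fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    }

For the run above, the probe target would be apiServerHealthy("https://127.0.0.1:60085/healthz").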
	I0728 16:02:44.642492   31212 cni.go:95] Creating CNI manager for ""
	I0728 16:02:44.642496   31212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 16:02:44.642514   31212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 16:02:44.649448   31212 system_pods.go:59] 8 kube-system pods found
	I0728 16:02:44.649469   31212 system_pods.go:61] "coredns-6d4b75cb6d-prc72" [d43bde09-e312-4bea-952e-daf9ee264c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0728 16:02:44.649476   31212 system_pods.go:61] "etcd-newest-cni-20220728160133-12923" [a6bb18a1-1ae7-40d9-a37c-2e5648f96e42] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0728 16:02:44.649481   31212 system_pods.go:61] "kube-apiserver-newest-cni-20220728160133-12923" [b6e1fc8f-718f-493d-b415-4216550c6c10] Running
	I0728 16:02:44.649487   31212 system_pods.go:61] "kube-controller-manager-newest-cni-20220728160133-12923" [4b232ec1-a4c1-4396-a592-cd9890a34eeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0728 16:02:44.649494   31212 system_pods.go:61] "kube-proxy-7jx99" [e05eddca-b40e-419b-b509-2a5b2523c7da] Running
	I0728 16:02:44.649498   31212 system_pods.go:61] "kube-scheduler-newest-cni-20220728160133-12923" [cce41ddb-daea-4fe7-8b7c-609cd384981a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 16:02:44.649504   31212 system_pods.go:61] "metrics-server-5c6f97fb75-h4qvh" [43e62268-70c1-4a6e-9411-0ab4fa1c30f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 16:02:44.649508   31212 system_pods.go:61] "storage-provisioner" [2dc9b609-a74c-428c-a22d-c26db32aacf1] Running
	I0728 16:02:44.649512   31212 system_pods.go:74] duration metric: took 6.9952ms to wait for pod list to return data ...
	I0728 16:02:44.649518   31212 node_conditions.go:102] verifying NodePressure condition ...
	I0728 16:02:44.652283   31212 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 16:02:44.652297   31212 node_conditions.go:123] node cpu capacity is 6
	I0728 16:02:44.652305   31212 node_conditions.go:105] duration metric: took 2.784314ms to run NodePressure ...
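	The node_conditions.go lines report node capacity and then confirm that no pressure condition is set. A hedged client-go sketch of that kind of check (kubeconfig loading and output format are assumptions; only the condition types are standard Kubernetes API values):

	package main

	import (
	    "context"
	    "fmt"

	    corev1 "k8s.io/api/core/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)

	func main() {
	    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    if err != nil {
	        panic(err)
	    }
	    clientset := kubernetes.NewForConfigOrDie(config)

	    nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	    if err != nil {
	        panic(err)
	    }
	    for _, node := range nodes.Items {
	        // Report capacity the way the log does, then flag any pressure condition.
	        fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", node.Name,
	            node.Status.Capacity.Cpu(), node.Status.Capacity.StorageEphemeral())
	        for _, cond := range node.Status.Conditions {
	            switch cond.Type {
	            case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
	                if cond.Status == corev1.ConditionTrue {
	                    fmt.Printf("  pressure condition set: %s\n", cond.Type)
	                }
	            }
	        }
	    }
	}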
	I0728 16:02:44.652318   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:44.782768   31212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 16:02:44.791228   31212 ops.go:34] apiserver oom_adj: -16
	I0728 16:02:44.791238   31212 kubeadm.go:630] restartCluster took 9.963191519s
	I0728 16:02:44.791245   31212 kubeadm.go:397] StartCluster complete in 10.001871582s
	I0728 16:02:44.791258   31212 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:02:44.791330   31212 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 16:02:44.791949   31212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:02:44.795248   31212 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220728160133-12923" rescaled to 1
	I0728 16:02:44.795284   31212 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 16:02:44.795309   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 16:02:44.817333   31212 out.go:177] * Verifying Kubernetes components...
	I0728 16:02:44.795312   31212 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0728 16:02:44.795464   31212 config.go:178] Loaded profile config "newest-cni-20220728160133-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 16:02:44.873731   31212 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0728 16:02:44.875008   31212 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220728160133-12923"
	I0728 16:02:44.875021   31212 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220728160133-12923"
	I0728 16:02:44.875027   31212 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220728160133-12923"
	I0728 16:02:44.875027   31212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 16:02:44.875033   31212 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220728160133-12923"
	W0728 16:02:44.875037   31212 addons.go:162] addon storage-provisioner should already be in state true
	I0728 16:02:44.875037   31212 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220728160133-12923"
	W0728 16:02:44.875057   31212 addons.go:162] addon metrics-server should already be in state true
	I0728 16:02:44.875061   31212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220728160133-12923"
	I0728 16:02:44.875049   31212 addons.go:65] Setting dashboard=true in profile "newest-cni-20220728160133-12923"
	I0728 16:02:44.875101   31212 host.go:66] Checking if "newest-cni-20220728160133-12923" exists ...
	I0728 16:02:44.875122   31212 host.go:66] Checking if "newest-cni-20220728160133-12923" exists ...
	I0728 16:02:44.875135   31212 addons.go:153] Setting addon dashboard=true in "newest-cni-20220728160133-12923"
	W0728 16:02:44.875145   31212 addons.go:162] addon dashboard should already be in state true
	I0728 16:02:44.875186   31212 host.go:66] Checking if "newest-cni-20220728160133-12923" exists ...
	I0728 16:02:44.875380   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:44.875540   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:44.876287   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:44.877471   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:44.891602   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:44.978222   31212 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220728160133-12923"
	W0728 16:02:45.038151   31212 addons.go:162] addon default-storageclass should already be in state true
	I0728 16:02:44.995981   31212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 16:02:45.017281   31212 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0728 16:02:45.038109   31212 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0728 16:02:45.038184   31212 host.go:66] Checking if "newest-cni-20220728160133-12923" exists ...
	I0728 16:02:45.059127   31212 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 16:02:45.117439   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 16:02:45.059503   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:45.080408   31212 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0728 16:02:45.114145   31212 api_server.go:51] waiting for apiserver process to appear ...
	I0728 16:02:45.117604   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:45.155111   31212 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0728 16:02:45.155127   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0728 16:02:45.155280   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:45.155287   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:02:45.176338   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0728 16:02:45.176362   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0728 16:02:45.176483   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:45.190146   31212 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 16:02:45.190172   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 16:02:45.190332   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
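	The cli_runner.go lines above repeatedly resolve which host port Docker published for a container port (8443/tcp for the apiserver, 22/tcp for SSH) by passing a Go template to docker container inspect. A minimal sketch of the same lookup (the helper name is made up; the template string is the one visible in the log):

	package main

	import (
	    "fmt"
	    "os/exec"
	    "strings"
	)

	// hostPortFor shells out to `docker container inspect` with a Go template
	// that indexes .NetworkSettings.Ports, returning the host port Docker
	// published for the given container port (e.g. "8443/tcp").
	func hostPortFor(container, containerPort string) (string, error) {
	    format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
	    out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	    if err != nil {
	        return "", err
	    }
	    return strings.TrimSpace(string(out)), nil
	}

	func main() {
	    port, err := hostPortFor("newest-cni-20220728160133-12923", "8443/tcp")
	    if err != nil {
	        panic(err)
	    }
	    fmt.Println("apiserver reachable at https://127.0.0.1:" + port)
	}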
	I0728 16:02:45.193858   31212 api_server.go:71] duration metric: took 398.53275ms to wait for apiserver process to appear ...
	I0728 16:02:45.193933   31212 api_server.go:87] waiting for apiserver healthz status ...
	I0728 16:02:45.193959   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:45.203508   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 200:
	ok
	I0728 16:02:45.205814   31212 api_server.go:140] control plane version: v1.24.3
	I0728 16:02:45.205868   31212 api_server.go:130] duration metric: took 11.908253ms to wait for apiserver health ...
	I0728 16:02:45.205895   31212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 16:02:45.215719   31212 system_pods.go:59] 8 kube-system pods found
	I0728 16:02:45.215742   31212 system_pods.go:61] "coredns-6d4b75cb6d-prc72" [d43bde09-e312-4bea-952e-daf9ee264c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0728 16:02:45.215770   31212 system_pods.go:61] "etcd-newest-cni-20220728160133-12923" [a6bb18a1-1ae7-40d9-a37c-2e5648f96e42] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0728 16:02:45.215782   31212 system_pods.go:61] "kube-apiserver-newest-cni-20220728160133-12923" [b6e1fc8f-718f-493d-b415-4216550c6c10] Running
	I0728 16:02:45.215805   31212 system_pods.go:61] "kube-controller-manager-newest-cni-20220728160133-12923" [4b232ec1-a4c1-4396-a592-cd9890a34eeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0728 16:02:45.215826   31212 system_pods.go:61] "kube-proxy-7jx99" [e05eddca-b40e-419b-b509-2a5b2523c7da] Running
	I0728 16:02:45.215846   31212 system_pods.go:61] "kube-scheduler-newest-cni-20220728160133-12923" [cce41ddb-daea-4fe7-8b7c-609cd384981a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 16:02:45.215870   31212 system_pods.go:61] "metrics-server-5c6f97fb75-h4qvh" [43e62268-70c1-4a6e-9411-0ab4fa1c30f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 16:02:45.215878   31212 system_pods.go:61] "storage-provisioner" [2dc9b609-a74c-428c-a22d-c26db32aacf1] Running
	I0728 16:02:45.215884   31212 system_pods.go:74] duration metric: took 9.974897ms to wait for pod list to return data ...
	I0728 16:02:45.215894   31212 default_sa.go:34] waiting for default service account to be created ...
	I0728 16:02:45.220763   31212 default_sa.go:45] found service account: "default"
	I0728 16:02:45.220781   31212 default_sa.go:55] duration metric: took 4.880297ms for default service account to be created ...
	I0728 16:02:45.220800   31212 kubeadm.go:572] duration metric: took 425.506612ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0728 16:02:45.220825   31212 node_conditions.go:102] verifying NodePressure condition ...
	I0728 16:02:45.225563   31212 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 16:02:45.225578   31212 node_conditions.go:123] node cpu capacity is 6
	I0728 16:02:45.225586   31212 node_conditions.go:105] duration metric: took 4.7554ms to run NodePressure ...
	I0728 16:02:45.225596   31212 start.go:216] waiting for startup goroutines ...
	I0728 16:02:45.248531   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:45.278823   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:45.279030   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:45.289678   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
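	Each sshutil.go line above opens an SSH connection to the node through the port Docker forwarded to localhost (127.0.0.1:60086), authenticating as user "docker" with the profile's id_rsa. A rough sketch with golang.org/x/crypto/ssh; the host-key shortcut is illustrative and only tolerable because the endpoint is a local test container, and the key path placeholder stands in for the full path shown in the log:

	package main

	import (
	    "fmt"
	    "os"

	    "golang.org/x/crypto/ssh"
	)

	// dialNode opens an SSH session to a minikube node forwarded to localhost,
	// authenticates with the machine's private key, and runs one command.
	func dialNode(addr, keyPath, command string) (string, error) {
	    key, err := os.ReadFile(keyPath)
	    if err != nil {
	        return "", err
	    }
	    signer, err := ssh.ParsePrivateKey(key)
	    if err != nil {
	        return "", err
	    }
	    config := &ssh.ClientConfig{
	        User:            "docker",
	        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
	    }
	    client, err := ssh.Dial("tcp", addr, config)
	    if err != nil {
	        return "", err
	    }
	    defer client.Close()
	    session, err := client.NewSession()
	    if err != nil {
	        return "", err
	    }
	    defer session.Close()
	    out, err := session.CombinedOutput(command)
	    return string(out), err
	}

	func main() {
	    out, err := dialNode("127.0.0.1:60086", "/path/to/.minikube/machines/<profile>/id_rsa", "uname -a")
	    fmt.Println(out, err)
	}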
	I0728 16:02:45.403448   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0728 16:02:45.403464   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0728 16:02:45.406521   31212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 16:02:45.408574   31212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0728 16:02:45.408585   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0728 16:02:45.416041   31212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 16:02:45.497595   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0728 16:02:45.497609   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0728 16:02:45.498283   31212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0728 16:02:45.498299   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0728 16:02:45.595570   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0728 16:02:45.595587   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0728 16:02:45.600458   31212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 16:02:45.600472   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0728 16:02:45.681725   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0728 16:02:45.681760   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0728 16:02:45.706741   31212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 16:02:45.782502   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0728 16:02:45.782522   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0728 16:02:45.808701   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0728 16:02:45.808714   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0728 16:02:45.895440   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0728 16:02:45.895454   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0728 16:02:45.920964   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0728 16:02:45.920977   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0728 16:02:46.000259   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 16:02:46.000277   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0728 16:02:46.023418   31212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 16:02:46.505026   31212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.098500344s)
	I0728 16:02:46.519148   31212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.103104144s)
	I0728 16:02:46.594877   31212 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220728160133-12923"
	I0728 16:02:46.690101   31212 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0728 16:02:46.748423   31212 addons.go:414] enableAddons completed in 1.953149328s
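	The addon flow recorded above follows one pattern: render each manifest in memory, scp it to /etc/kubernetes/addons/ on the node, then apply the whole group with a single kubectl apply carrying one -f flag per file. A compressed sketch of building that apply command (the helper is hypothetical; the binary and kubeconfig paths are the ones in the ssh_runner.go lines above):

	package main

	import (
	    "fmt"
	    "strings"
	)

	// kubectlApplyCmd builds the single `kubectl apply` invocation seen in the
	// log: one -f flag per previously-copied addon manifest, run as root with
	// the node-local kubeconfig.
	func kubectlApplyCmd(kubectlPath, kubeconfig string, manifests []string) string {
	    args := []string{"sudo", "KUBECONFIG=" + kubeconfig, kubectlPath, "apply"}
	    for _, m := range manifests {
	        args = append(args, "-f", m)
	    }
	    return strings.Join(args, " ")
	}

	func main() {
	    cmd := kubectlApplyCmd(
	        "/var/lib/minikube/binaries/v1.24.3/kubectl",
	        "/var/lib/minikube/kubeconfig",
	        []string{
	            "/etc/kubernetes/addons/dashboard-ns.yaml",
	            "/etc/kubernetes/addons/dashboard-svc.yaml",
	        },
	    )
	    fmt.Println(cmd) // executed over SSH on the node, as in the log
	}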
	I0728 16:02:46.778888   31212 start.go:506] kubectl: 1.24.2, cluster: 1.24.3 (minor skew: 0)
	I0728 16:02:46.800597   31212 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220728160133-12923" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 23:02:31 UTC, end at Thu 2022-07-28 23:03:23 UTC. --
	Jul 28 23:02:33 newest-cni-20220728160133-12923 systemd[1]: Starting Docker Application Container Engine...
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.578462628Z" level=info msg="Starting up"
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.580109544Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.580218856Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.580279653Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.580322327Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.581300560Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.581330360Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.581342547Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.581354588Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.584787587Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.589438646Z" level=info msg="Loading containers: start."
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.688344140Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.723463130Z" level=info msg="Loading containers: done."
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.732928806Z" level=info msg="Docker daemon" commit=a89b842 graphdriver(s)=overlay2 version=20.10.17
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.732991345Z" level=info msg="Daemon has completed initialization"
	Jul 28 23:02:33 newest-cni-20220728160133-12923 systemd[1]: Started Docker Application Container Engine.
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.753986251Z" level=info msg="API listen on [::]:2376"
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.759180262Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 28 23:02:46 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:46.095142741Z" level=info msg="ignoring event" container=1073b2237ee09691246c51622bdba357e813d744b05e6208b6c0000ac5b2df93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:02:46 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:46.308467432Z" level=info msg="ignoring event" container=eefc23ae5401939b583510607187252277a522b71242564f606aa8a49ea1f77b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:02:47 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:47.349773236Z" level=info msg="ignoring event" container=2d68668f2766d70399abce9eea04ef9d197aa943011547f0ca937566ca712c9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:02:47 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:47.351624598Z" level=info msg="ignoring event" container=83c1be755e5d07deb27c2d1db5b9e270ca0c68fd2dd75a91d471c7ed7db439a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:02:48 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:48.264703228Z" level=info msg="ignoring event" container=59d0f547cf6ac37318a1b09fbe3aa45dada60d858ae6a532da391be4d9e4899c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:02:48 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:48.301046738Z" level=info msg="ignoring event" container=3a1e8f28a26272ca7b40933b8f13de4c96a9a7d01759446ec572b83925b6b18f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	8ae7c37eff3f5       6e38f40d628db       39 seconds ago       Running             storage-provisioner       1                   5c92a36ddae3a
	56bcddf7d2fa0       2ae1ba6417cbc       40 seconds ago       Running             kube-proxy                1                   1f55de862e21b
	edc88662680f9       586c112956dfc       45 seconds ago       Running             kube-controller-manager   1                   010af3b0a8368
	3b9280ec90204       aebe758cef4cd       45 seconds ago       Running             etcd                      1                   51816182d04e5
	b788ccc753b1d       3a5aa3a515f5d       45 seconds ago       Running             kube-scheduler            1                   6d7fd19fe7c92
	33001086c552e       d521dd763e2e3       45 seconds ago       Running             kube-apiserver            1                   e026ae869b375
	530b6eb6d7c7a       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   97305fddcc4a5
	89e2d6baa776e       2ae1ba6417cbc       About a minute ago   Exited              kube-proxy                0                   ba52b06cc5320
	5e4060df40544       aebe758cef4cd       About a minute ago   Exited              etcd                      0                   0ffd2a96459ea
	6795e4facb303       586c112956dfc       About a minute ago   Exited              kube-controller-manager   0                   75a84df66ff97
	f41fe719b176d       3a5aa3a515f5d       About a minute ago   Exited              kube-scheduler            0                   09e8e2df77165
	857d35ea3c0ed       d521dd763e2e3       About a minute ago   Exited              kube-apiserver            0                   9624f7dfb492e
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220728160133-12923
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220728160133-12923
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551
	                    minikube.k8s.io/name=newest-cni-20220728160133-12923
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_28T16_02_00_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Jul 2022 23:01:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220728160133-12923
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Jul 2022 23:03:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Jul 2022 23:03:22 +0000   Thu, 28 Jul 2022 23:01:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Jul 2022 23:03:22 +0000   Thu, 28 Jul 2022 23:01:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Jul 2022 23:03:22 +0000   Thu, 28 Jul 2022 23:01:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Jul 2022 23:03:22 +0000   Thu, 28 Jul 2022 23:03:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    newest-cni-20220728160133-12923
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                0df2c05f-97b9-447d-ba86-6ab9bb1c9e96
	  Boot ID:                    defbf4cf-33ea-4fb3-a1e4-ed0c170fdaf9
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-prc72                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     70s
	  kube-system                 etcd-newest-cni-20220728160133-12923                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kube-apiserver-newest-cni-20220728160133-12923             250m (4%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-newest-cni-20220728160133-12923    200m (3%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-7jx99                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-newest-cni-20220728160133-12923             100m (1%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 metrics-server-5c6f97fb75-h4qvh                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         68s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-vkblv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-f9884                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)   0 (0%)
	  memory             370Mi (6%)   170Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 69s                kube-proxy       
	  Normal  Starting                 39s                kube-proxy       
	  Normal  NodeHasSufficientMemory  95s (x5 over 95s)  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x5 over 95s)  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x4 over 95s)  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasSufficientPID
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  84s                kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s                kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s                kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             84s                kubelet          Node newest-cni-20220728160133-12923 status is now: NodeNotReady
	  Normal  NodeReady                74s                kubelet          Node newest-cni-20220728160133-12923 status is now: NodeReady
	  Normal  RegisteredNode           71s                node-controller  Node newest-cni-20220728160133-12923 event: Registered Node newest-cni-20220728160133-12923 in Controller
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-20220728160133-12923 event: Registered Node newest-cni-20220728160133-12923 in Controller
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3s                 kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s                 kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s                 kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2s                 kubelet          Node newest-cni-20220728160133-12923 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2s                 kubelet          Node newest-cni-20220728160133-12923 status is now: NodeReady
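	For reference, the request percentages in the Allocated resources table above are computed against the node's allocatable capacity and truncated: 850m CPU out of 6 cores is 850/6000 ≈ 14%, and 370Mi of memory out of 6086504Ki is (370 × 1024)/6086504 = 378880/6086504 ≈ 6%.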
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [3b9280ec9020] <==
	* {"level":"info","ts":"2022-07-28T23:02:40.211Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-07-28T23:02:40.212Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-07-28T23:02:40.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-07-28T23:02:40.212Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-07-28T23:02:40.212Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:02:40.212Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:02:40.213Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-28T23:02:40.213Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T23:02:40.213Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-28T23:02:40.213Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T23:02:40.213Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-07-28T23:02:41.649Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20220728160133-12923 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T23:02:41.649Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T23:02:41.649Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T23:02:41.649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T23:02:41.649Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T23:02:41.650Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-07-28T23:02:41.650Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [5e4060df4054] <==
	* {"level":"info","ts":"2022-07-28T23:01:54.538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20220728160133-12923 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:01:54.539Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T23:01:54.539Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T23:01:54.539Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-07-28T23:01:54.539Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:01:54.539Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:01:54.540Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:01:54.540Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-28T23:02:17.127Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-07-28T23:02:17.127Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220728160133-12923","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/07/28 23:02:17 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/07/28 23:02:17 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-07-28T23:02:17.138Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-07-28T23:02:17.139Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T23:02:17.141Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T23:02:17.141Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220728160133-12923","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  23:03:24 up  1:24,  0 users,  load average: 0.98, 0.93, 1.01
	Linux newest-cni-20220728160133-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [33001086c552] <==
	* I0728 23:02:43.422612       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0728 23:02:43.423906       1 cache.go:39] Caches are synced for autoregister controller
	I0728 23:02:43.424245       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0728 23:02:43.427368       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0728 23:02:44.093060       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0728 23:02:44.325674       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0728 23:02:44.458764       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 23:02:44.458854       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0728 23:02:44.458861       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 23:02:44.458878       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 23:02:44.458967       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0728 23:02:44.459921       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0728 23:02:44.733595       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0728 23:02:44.741395       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0728 23:02:44.770106       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0728 23:02:44.781554       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0728 23:02:44.785636       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0728 23:02:45.223115       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0728 23:02:46.587334       1 controller.go:611] quota admission added evaluator for: namespaces
	I0728 23:02:46.649203       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.97.208.140]
	I0728 23:02:46.657361       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.96.105.19]
	I0728 23:03:21.562447       1 controller.go:611] quota admission added evaluator for: endpoints
	I0728 23:03:21.570846       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0728 23:03:21.767825       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [857d35ea3c0e] <==
	* W0728 23:02:26.362674       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.382486       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.419982       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.453233       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.456753       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.553978       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.577606       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.697954       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.721726       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.739554       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.739596       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.804325       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.849579       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.872432       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.884017       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.884110       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.896150       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.941669       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.963266       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.983682       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:27.041592       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:27.089614       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:27.098391       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:27.101917       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:27.194138       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [6795e4facb30] <==
	* I0728 23:02:13.204442       1 shared_informer.go:262] Caches are synced for cronjob
	I0728 23:02:13.255893       1 shared_informer.go:262] Caches are synced for endpoint
	I0728 23:02:13.255977       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0728 23:02:13.256175       1 shared_informer.go:262] Caches are synced for persistent volume
	I0728 23:02:13.304298       1 shared_informer.go:262] Caches are synced for taint
	I0728 23:02:13.304479       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0728 23:02:13.304826       1 node_lifecycle_controller.go:1014] Missing timestamp for Node newest-cni-20220728160133-12923. Assuming now as a timestamp.
	I0728 23:02:13.304967       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0728 23:02:13.304643       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0728 23:02:13.304895       1 event.go:294] "Event occurred" object="newest-cni-20220728160133-12923" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220728160133-12923 event: Registered Node newest-cni-20220728160133-12923 in Controller"
	I0728 23:02:13.308668       1 shared_informer.go:262] Caches are synced for resource quota
	I0728 23:02:13.312934       1 shared_informer.go:262] Caches are synced for resource quota
	I0728 23:02:13.720318       1 shared_informer.go:262] Caches are synced for garbage collector
	I0728 23:02:13.778055       1 shared_informer.go:262] Caches are synced for garbage collector
	I0728 23:02:13.778090       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0728 23:02:13.812014       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7jx99"
	I0728 23:02:13.858513       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0728 23:02:13.917496       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0728 23:02:14.108686       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-prc72"
	I0728 23:02:14.112586       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-fg5bv"
	I0728 23:02:14.125532       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-fg5bv"
	I0728 23:02:16.429389       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0728 23:02:16.433402       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0728 23:02:16.438677       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0728 23:02:16.442091       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-h4qvh"
	
	* 
	* ==> kube-controller-manager [edc88662680f] <==
	* I0728 23:03:21.571665       1 shared_informer.go:262] Caches are synced for attach detach
	I0728 23:03:21.576079       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0728 23:03:21.576369       1 shared_informer.go:262] Caches are synced for node
	I0728 23:03:21.576489       1 range_allocator.go:173] Starting range CIDR allocator
	I0728 23:03:21.576511       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0728 23:03:21.576379       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0728 23:03:21.576577       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0728 23:03:21.577447       1 shared_informer.go:262] Caches are synced for endpoint
	I0728 23:03:21.577797       1 shared_informer.go:262] Caches are synced for PVC protection
	I0728 23:03:21.594382       1 shared_informer.go:262] Caches are synced for stateful set
	I0728 23:03:21.660311       1 shared_informer.go:262] Caches are synced for daemon sets
	I0728 23:03:21.671827       1 shared_informer.go:262] Caches are synced for service account
	I0728 23:03:21.691271       1 shared_informer.go:262] Caches are synced for namespace
	I0728 23:03:21.757951       1 shared_informer.go:262] Caches are synced for deployment
	I0728 23:03:21.761373       1 shared_informer.go:262] Caches are synced for disruption
	I0728 23:03:21.761408       1 disruption.go:371] Sending events to api server.
	I0728 23:03:21.771222       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0728 23:03:21.771259       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0728 23:03:21.779788       1 shared_informer.go:262] Caches are synced for resource quota
	I0728 23:03:21.814461       1 shared_informer.go:262] Caches are synced for resource quota
	I0728 23:03:21.868938       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-f9884"
	I0728 23:03:21.868959       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-vkblv"
	I0728 23:03:22.194547       1 shared_informer.go:262] Caches are synced for garbage collector
	I0728 23:03:22.194580       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0728 23:03:22.263748       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [56bcddf7d2fa] <==
	* I0728 23:02:45.059312       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0728 23:02:45.059516       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0728 23:02:45.059873       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 23:02:45.214125       1 server_others.go:206] "Using iptables Proxier"
	I0728 23:02:45.214200       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 23:02:45.214213       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 23:02:45.214239       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 23:02:45.214285       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 23:02:45.216569       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 23:02:45.217040       1 server.go:661] "Version info" version="v1.24.3"
	I0728 23:02:45.217296       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 23:02:45.218254       1 config.go:317] "Starting service config controller"
	I0728 23:02:45.219411       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 23:02:45.218726       1 config.go:226] "Starting endpoint slice config controller"
	I0728 23:02:45.219488       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 23:02:45.219567       1 config.go:444] "Starting node config controller"
	I0728 23:02:45.219602       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 23:02:45.320635       1 shared_informer.go:262] Caches are synced for node config
	I0728 23:02:45.320658       1 shared_informer.go:262] Caches are synced for service config
	I0728 23:02:45.320929       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [89e2d6baa776] <==
	* I0728 23:02:14.393528       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0728 23:02:14.393595       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0728 23:02:14.393637       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 23:02:14.552552       1 server_others.go:206] "Using iptables Proxier"
	I0728 23:02:14.552608       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 23:02:14.552618       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 23:02:14.552628       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 23:02:14.552659       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 23:02:14.552768       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 23:02:14.554065       1 server.go:661] "Version info" version="v1.24.3"
	I0728 23:02:14.554141       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 23:02:14.555129       1 config.go:317] "Starting service config controller"
	I0728 23:02:14.555184       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 23:02:14.555203       1 config.go:226] "Starting endpoint slice config controller"
	I0728 23:02:14.555206       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 23:02:14.555896       1 config.go:444] "Starting node config controller"
	I0728 23:02:14.555905       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 23:02:14.656315       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0728 23:02:14.656352       1 shared_informer.go:262] Caches are synced for service config
	I0728 23:02:14.656468       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b788ccc753b1] <==
	* W0728 23:02:40.206931       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0728 23:02:40.757435       1 serving.go:348] Generated self-signed cert in-memory
	W0728 23:02:43.325546       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0728 23:02:43.325583       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0728 23:02:43.325590       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0728 23:02:43.325595       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0728 23:02:43.335841       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0728 23:02:43.335874       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 23:02:43.339632       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0728 23:02:43.339709       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0728 23:02:43.339915       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0728 23:02:43.339719       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0728 23:02:43.442240       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [f41fe719b176] <==
	* E0728 23:01:56.950695       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0728 23:01:56.950097       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0728 23:01:56.950796       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0728 23:01:56.950176       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0728 23:01:56.950842       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0728 23:01:57.757064       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0728 23:01:57.757215       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0728 23:01:57.863738       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0728 23:01:57.863801       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0728 23:01:57.865849       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0728 23:01:57.865894       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0728 23:01:57.869856       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0728 23:01:57.869887       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0728 23:01:57.877291       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0728 23:01:57.877336       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0728 23:01:57.994729       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0728 23:01:57.994765       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0728 23:01:58.065876       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0728 23:01:58.065961       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0728 23:01:58.094855       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0728 23:01:58.094891       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0728 23:01:58.446860       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0728 23:02:17.206740       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0728 23:02:17.206775       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0728 23:02:17.207992       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 23:02:31 UTC, end at Thu 2022-07-28 23:03:26 UTC. --
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         ]
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:  > pod="kube-system/coredns-6d4b75cb6d-prc72"
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]: E0728 23:03:26.077949    3514 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6d4b75cb6d-prc72_kube-system(d43bde09-e312-4bea-952e-daf9ee264c55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6d4b75cb6d-prc72_kube-system(d43bde09-e312-4bea-952e-daf9ee264c55)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"bd1703b60ad8bb63335e7696a1ce34f088a155c44c34c53eec18fcc6d81c6885\\\" network for pod \\\"coredns-6d4b75cb6d-prc72\\\": networkPlugin cni failed to set up pod \\\"coredns-6d4b75cb6d-prc72_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"bd1703b60ad8bb63335e7696a1ce34f088a155c44c34c53eec18fcc6d81c6885\\\" network for pod \\\"coredns-6d4b75cb6d-prc72\\\": networkPlugin cni failed to teardown pod \\\"coredns-6d4b75cb6d-prc72_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.12 -j CNI-1ddf9e8e5cfc41f13ae7f52d -m comment --comment name: \\\"crio\\\" id: \\\"bd1703b60ad8bb63335e7696a1ce34f088a155c44c34c53eec18fcc6d81c6885\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-1ddf9e8e5cfc41f13ae7f52d':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/coredns-6d4b75cb6d-prc72" podUID=d43bde09-e312-4bea-952e-daf9ee264c55
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]: E0728 23:03:26.333379    3514 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err=<
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         rpc error: code = Unknown desc = [failed to set up sandbox container "ea35918a4414868ff75cc40ecac67254f01e62157f8d9cb5e84b9a18c50ab047" network for pod "metrics-server-5c6f97fb75-h4qvh": networkPlugin cni failed to set up pod "metrics-server-5c6f97fb75-h4qvh_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "ea35918a4414868ff75cc40ecac67254f01e62157f8d9cb5e84b9a18c50ab047" network for pod "metrics-server-5c6f97fb75-h4qvh": networkPlugin cni failed to teardown pod "metrics-server-5c6f97fb75-h4qvh_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.13 -j CNI-c5a40a849b461ebb596a36c7 -m comment --comment name: "crio" id: "ea35918a4414868ff75cc40ecac67254f01e62157f8d9cb5e84b9a18c50ab047" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-c5a40a849b461ebb596a36c7':No such file or directory
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         ]
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:  >
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]: E0728 23:03:26.333598    3514 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         rpc error: code = Unknown desc = [failed to set up sandbox container "ea35918a4414868ff75cc40ecac67254f01e62157f8d9cb5e84b9a18c50ab047" network for pod "metrics-server-5c6f97fb75-h4qvh": networkPlugin cni failed to set up pod "metrics-server-5c6f97fb75-h4qvh_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "ea35918a4414868ff75cc40ecac67254f01e62157f8d9cb5e84b9a18c50ab047" network for pod "metrics-server-5c6f97fb75-h4qvh": networkPlugin cni failed to teardown pod "metrics-server-5c6f97fb75-h4qvh_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.13 -j CNI-c5a40a849b461ebb596a36c7 -m comment --comment name: "crio" id: "ea35918a4414868ff75cc40ecac67254f01e62157f8d9cb5e84b9a18c50ab047" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-c5a40a849b461ebb596a36c7':No such file or directory
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         ]
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:  > pod="kube-system/metrics-server-5c6f97fb75-h4qvh"
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]: E0728 23:03:26.333750    3514 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err=<
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         rpc error: code = Unknown desc = [failed to set up sandbox container "ea35918a4414868ff75cc40ecac67254f01e62157f8d9cb5e84b9a18c50ab047" network for pod "metrics-server-5c6f97fb75-h4qvh": networkPlugin cni failed to set up pod "metrics-server-5c6f97fb75-h4qvh_kube-system" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "ea35918a4414868ff75cc40ecac67254f01e62157f8d9cb5e84b9a18c50ab047" network for pod "metrics-server-5c6f97fb75-h4qvh": networkPlugin cni failed to teardown pod "metrics-server-5c6f97fb75-h4qvh_kube-system" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.13 -j CNI-c5a40a849b461ebb596a36c7 -m comment --comment name: "crio" id: "ea35918a4414868ff75cc40ecac67254f01e62157f8d9cb5e84b9a18c50ab047" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-c5a40a849b461ebb596a36c7':No such file or directory
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:         ]
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]:  > pod="kube-system/metrics-server-5c6f97fb75-h4qvh"
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]: E0728 23:03:26.333950    3514 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"metrics-server-5c6f97fb75-h4qvh_kube-system(43e62268-70c1-4a6e-9411-0ab4fa1c30f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"metrics-server-5c6f97fb75-h4qvh_kube-system(43e62268-70c1-4a6e-9411-0ab4fa1c30f9)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"ea35918a4414868ff75cc40ecac67254f01e62157f8d9cb5e84b9a18c50ab047\\\" network for pod \\\"metrics-server-5c6f97fb75-h4qvh\\\": networkPlugin cni failed to set up pod \\\"metrics-server-5c6f97fb75-h4qvh_kube-system\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"ea35918a4414868ff75cc40ecac67254f01e62157f8d9cb5e84b9a18c50ab047\\\" network for pod \\\"metrics-server-5c6f97fb75-h4qvh\\\": networkPlugin cni failed to teardown pod \\\"metrics-server-5c6f97fb75-h4qvh_kube-system\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.13 -j CNI-c5a40a849b461ebb596a36c7 -m comment --comment name: \\\"crio\\\" id: \\\"ea35918a4414868ff75cc40ecac67254f01e62157f8d9cb5e84b9a18c50ab047\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-c5a40a849b461ebb596a36c7':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kube-system/metrics-server-5c6f97fb75-h4qvh" podUID=43e62268-70c1-4a6e-9411-0ab4fa1c30f9
	Jul 28 23:03:26 newest-cni-20220728160133-12923 kubelet[3514]: I0728 23:03:26.348271    3514 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="bd1703b60ad8bb63335e7696a1ce34f088a155c44c34c53eec18fcc6d81c6885"
	
	* 
	* ==> storage-provisioner [530b6eb6d7c7] <==
	* I0728 23:02:16.179661       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 23:02:16.189621       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 23:02:16.189732       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0728 23:02:16.198129       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0728 23:02:16.198212       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca1f46b7-9e6c-4222-b687-d530a192b3a5", APIVersion:"v1", ResourceVersion:"383", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220728160133-12923_a9ebf106-3c81-4a79-892c-578d732a17e8 became leader
	I0728 23:02:16.198330       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220728160133-12923_a9ebf106-3c81-4a79-892c-578d732a17e8!
	I0728 23:02:16.299644       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220728160133-12923_a9ebf106-3c81-4a79-892c-578d732a17e8!
	
	* 
	* ==> storage-provisioner [8ae7c37eff3f] <==
	* I0728 23:02:46.103712       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 23:02:46.180010       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 23:02:46.180138       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0728 23:03:21.564963       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0728 23:03:21.565309       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220728160133-12923_199b279e-58c3-4ae1-9d38-ee57a26c704b!
	I0728 23:03:21.565296       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca1f46b7-9e6c-4222-b687-d530a192b3a5", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220728160133-12923_199b279e-58c3-4ae1-9d38-ee57a26c704b became leader
	I0728 23:03:21.666061       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220728160133-12923_199b279e-58c3-4ae1-9d38-ee57a26c704b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220728160133-12923 -n newest-cni-20220728160133-12923
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220728160133-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Done: kubectl --context newest-cni-20220728160133-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (2.016298773s)
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-prc72 metrics-server-5c6f97fb75-h4qvh dashboard-metrics-scraper-dffd48c4c-vkblv kubernetes-dashboard-5fd5574d9f-f9884
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220728160133-12923 describe pod coredns-6d4b75cb6d-prc72 metrics-server-5c6f97fb75-h4qvh dashboard-metrics-scraper-dffd48c4c-vkblv kubernetes-dashboard-5fd5574d9f-f9884
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220728160133-12923 describe pod coredns-6d4b75cb6d-prc72 metrics-server-5c6f97fb75-h4qvh dashboard-metrics-scraper-dffd48c4c-vkblv kubernetes-dashboard-5fd5574d9f-f9884: exit status 1 (193.038604ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-prc72" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-h4qvh" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-vkblv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-f9884" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220728160133-12923 describe pod coredns-6d4b75cb6d-prc72 metrics-server-5c6f97fb75-h4qvh dashboard-metrics-scraper-dffd48c4c-vkblv kubernetes-dashboard-5fd5574d9f-f9884: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-20220728160133-12923
helpers_test.go:235: (dbg) docker inspect newest-cni-20220728160133-12923:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dcabc2c4ee37df52f3c5b5e3f8d63bfbb1ae84be73e5a5a6355b3d22891b8d69",
	        "Created": "2022-07-28T23:01:40.749675454Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 314275,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-07-28T23:02:30.872182084Z",
	            "FinishedAt": "2022-07-28T23:02:28.845184509Z"
	        },
	        "Image": "sha256:443d84da239e4e701685e1614ef94cd6b60d0f0b15265a51d4f657992a9c59d8",
	        "ResolvConfPath": "/var/lib/docker/containers/dcabc2c4ee37df52f3c5b5e3f8d63bfbb1ae84be73e5a5a6355b3d22891b8d69/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dcabc2c4ee37df52f3c5b5e3f8d63bfbb1ae84be73e5a5a6355b3d22891b8d69/hostname",
	        "HostsPath": "/var/lib/docker/containers/dcabc2c4ee37df52f3c5b5e3f8d63bfbb1ae84be73e5a5a6355b3d22891b8d69/hosts",
	        "LogPath": "/var/lib/docker/containers/dcabc2c4ee37df52f3c5b5e3f8d63bfbb1ae84be73e5a5a6355b3d22891b8d69/dcabc2c4ee37df52f3c5b5e3f8d63bfbb1ae84be73e5a5a6355b3d22891b8d69-json.log",
	        "Name": "/newest-cni-20220728160133-12923",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-20220728160133-12923:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-20220728160133-12923",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d2a0d5f4912188452a307c08e6218ac8dcaf03602008b94e8b8286defe40f9ff-init/diff:/var/lib/docker/overlay2/056596d2929957d3e3e5b833685acff2b231f40eb9a087a59341c3c300a1b50e/diff:/var/lib/docker/overlay2/aeeecdb6606dd9393c4cb99ce4808b7bc24a6ef0097f307d6fed1bc526f97dc4/diff:/var/lib/docker/overlay2/e07e57252d473c4cf4585dd0b9e432d12de28ab92afbbfb37f7665912e11fc18/diff:/var/lib/docker/overlay2/bbac83d2502e96ec5da00604b65463367cd22f277fe6f609360044fb0f9e1377/diff:/var/lib/docker/overlay2/87e23b65710d2a58c9173bc6e4d8b8ffd791c437e43a6f436592bd31957aabec/diff:/var/lib/docker/overlay2/66ff02810cb15e198965336352c77a7ddfed37b38eabcb9c06bcf16d38254863/diff:/var/lib/docker/overlay2/54bab0cb140c2be9a41713079493f675d8c09dad36e2723ba0d090675c6c3a4a/diff:/var/lib/docker/overlay2/8b8e155fa64f1976f5bd773e5fd1f752349548f540f72a6f97626b319a6628c4/diff:/var/lib/docker/overlay2/288f61b1f03f49e0f468d48a99b71c5dda52c62497376c20e4062e00e0e212b0/diff:/var/lib/docker/overlay2/7ffdb6
eab366aa778a281545eab06c14f736685bcedd528425f7239d4182e3d6/diff:/var/lib/docker/overlay2/8084a5e10489b1f3c8307655c75072b74006e27dc9000fe9e4cd5a894ed2c5d0/diff:/var/lib/docker/overlay2/b22caa19afdac0befabc36e685db869ca09aacdfa3b05d0e6ed6509088428d5c/diff:/var/lib/docker/overlay2/da177b4c55e195e7dbd4bcfac4164d946999086a0b3f6cebea03d13078c5b054/diff:/var/lib/docker/overlay2/fed1ccce88d43bca8f3a608efe70e9c1bf5d7e0316a3b44092c51dcb76407869/diff:/var/lib/docker/overlay2/0deef43118296a1cdca3fdbb73d71fa36221439d43819f9d382585d4bb88e514/diff:/var/lib/docker/overlay2/36703718b3a187ca182c33b4a6196384bc12465f45284242445ef1c911522a3a/diff:/var/lib/docker/overlay2/d5495158f0841c57fe30745169420c5874add7c8753c5184c81c0eca278b7663/diff:/var/lib/docker/overlay2/b8e24e1ded71e0c81d302dd42229a8e6ecc6b70249b773e7f31c5e384bfbe807/diff:/var/lib/docker/overlay2/255faf81b7577b32278f8d91ac6c5fa13f016ce25c57f27a74ab25966c6aef1c/diff:/var/lib/docker/overlay2/b56171b260f9801a2b7aff8a07cb1feaebeaf99662ca0f1ecab49d06546da75a/diff:/var/lib/d
ocker/overlay2/c28df73f923fc29cb544712edb8520a508e27fbf70089902b11049fca8b16cce/diff:/var/lib/docker/overlay2/d33e1f8d5d5405cbf2ffb03b86e0627c34bf11c04ae68eb2e3bfdd718caf9aa4/diff:/var/lib/docker/overlay2/b8da86470b0957b347d26133bf0a33ac606fe96785e6360b4f04523256656b52/diff:/var/lib/docker/overlay2/dc2546d770c4cc6caae497560c40bc3e264046b42ed741118864be97a3dae436/diff:/var/lib/docker/overlay2/ccbb4f6660f8f577ace2d82cf7f55a1839a4fed8e177a672115b9606da1e39d6/diff:/var/lib/docker/overlay2/0fa957ad848132aca392d60e78058506e8df61bebe0a7e396d1d2915dc9f432d/diff:/var/lib/docker/overlay2/b7977f3de7aaa2ed3410442ce34e3a453c594428adc88b3a534b2956afe799ce/diff:/var/lib/docker/overlay2/ed33072eb8076fa8fe0c9fdb0a3de4e4bda337c4dfd23bb26223dd09570e2d20/diff:/var/lib/docker/overlay2/0cdef085db86a9c6f9657c4297d14a366fd816b8bdd2efd31970c0f70b87a490/diff:/var/lib/docker/overlay2/74cd47bcc28085a71facaa0f36ee9279e40db7efc4256190e05a4e938156c446/diff:/var/lib/docker/overlay2/a4e9b207a11193fc51296a1c4f8460cf573030f090caa5c97931b239d05
9732f/diff:/var/lib/docker/overlay2/f9c465c3e0ea0efb497df00981b3bff779c6baafbf2ab7a5a2584ba86a98eb7d/diff:/var/lib/docker/overlay2/11b9edd42ce56b4368a5179b4ab24cad8119edce94b34805aa7d32583ebb83a1/diff:/var/lib/docker/overlay2/e7a5451e6dd38cb177abc0ddc9d8ec6fc06f6f84b64c6a327e5bb3f3861e9f50/diff:/var/lib/docker/overlay2/6acbb8b5586c22d8819fa398bd3eabdebffad58c08685b439c7a610a0d0529ce/diff:/var/lib/docker/overlay2/57c0cdd0c15b962e55c5c26017708753503d33944831570b135a54e8c181e20a/diff:/var/lib/docker/overlay2/bf658887a041fdb93d1609e296e154bccfc5e0d0e222a5b72f917b85be40fbd8/diff:/var/lib/docker/overlay2/df3b97a9157b26b4b7ba94b94a2a64ff57b161f00247adc2ac33e4278f44ff77/diff:/var/lib/docker/overlay2/d89e345666b7dbd89c5ddff8f20f1d5a4889dbfe34585f220f881e95575fcaa1/diff:/var/lib/docker/overlay2/674ee03596c158982d0fb07646f4791c02de0938e0f732c546d2f668426fce13/diff:/var/lib/docker/overlay2/5665cbe7dda9368c88dc12d81c21851029f12099d4ac4a45dccb0ca694352e02/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d2a0d5f4912188452a307c08e6218ac8dcaf03602008b94e8b8286defe40f9ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d2a0d5f4912188452a307c08e6218ac8dcaf03602008b94e8b8286defe40f9ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d2a0d5f4912188452a307c08e6218ac8dcaf03602008b94e8b8286defe40f9ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-20220728160133-12923",
	                "Source": "/var/lib/docker/volumes/newest-cni-20220728160133-12923/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-20220728160133-12923",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-20220728160133-12923",
	                "name.minikube.sigs.k8s.io": "newest-cni-20220728160133-12923",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "751e7a20ba9581c1ddb1c3632ec1be0600dd665b3d47531e2685839174999367",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60086"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60082"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60083"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60084"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "60085"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/751e7a20ba95",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-20220728160133-12923": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dcabc2c4ee37",
	                        "newest-cni-20220728160133-12923"
	                    ],
	                    "NetworkID": "19d83e1f6d4f1918b5783e54f41e13d36ffa4cd02bc94de9d9a4d43b1c6abd02",
	                    "EndpointID": "b628b301c35e794d38d387dd97dcb878d4893624570dcec09e169129fd232420",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220728160133-12923 -n newest-cni-20220728160133-12923
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-20220728160133-12923 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-20220728160133-12923 logs -n 25: (4.927908499s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:48 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                     |                     |
	|         | --wait=true --embed-certs                                  |                                                 |         |         |                     |                     |
	|         | --driver=docker                                            |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:53 PDT | 28 Jul 22 15:53 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | embed-certs-20220728154707-12923                | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | embed-certs-20220728154707-12923                           |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | disable-driver-mounts-20220728155419-12923      | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:54 PDT |
	|         | disable-driver-mounts-20220728155419-12923                 |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:54 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 15:55 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 15:55 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                 |         |         |                     |                     |
	|         | --apiserver-port=8444 --driver=docker                      |                                                 |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.3                               |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:00 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:00 PDT | 28 Jul 22 16:00 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:01 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:01 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-different-port-20220728155420-12923 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:01 PDT |
	|         | default-k8s-different-port-20220728155420-12923            |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220728160133-12923 --memory=2200           | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:01 PDT | 28 Jul 22 16:02 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                     |                     |
	| stop    | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                     |                     |
	| addons  | enable dashboard -p                                        | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                     |                     |
	| start   | -p newest-cni-20220728160133-12923 --memory=2200           | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.24.3              |                                                 |         |         |                     |                     |
	| ssh     | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                     |                     |
	| pause   | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:02 PDT | 28 Jul 22 16:02 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	| unpause | -p                                                         | newest-cni-20220728160133-12923                 | jenkins | v1.26.0 | 28 Jul 22 16:03 PDT | 28 Jul 22 16:03 PDT |
	|         | newest-cni-20220728160133-12923                            |                                                 |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                     |                     |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|---------------------|---------------------|
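
The "Last Start" log below corresponds to the second start of this profile in the audit table above. A minimal reproduction against the same profile, assuming the tree-built binary at out/minikube-darwin-amd64 (the MINIKUBE_BIN noted later in this log):

    # replay the start command under test (flags copied verbatim from the table)
    out/minikube-darwin-amd64 start -p newest-cni-20220728160133-12923 --memory=2200 \
      --alsologtostderr --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
      --driver=docker --kubernetes-version=v1.24.3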
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 16:02:29
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 16:02:29.603987   31212 out.go:296] Setting OutFile to fd 1 ...
	I0728 16:02:29.604169   31212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 16:02:29.604175   31212 out.go:309] Setting ErrFile to fd 2...
	I0728 16:02:29.604179   31212 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 16:02:29.604278   31212 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 16:02:29.604724   31212 out.go:303] Setting JSON to false
	I0728 16:02:29.620611   31212 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":10391,"bootTime":1659038958,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 16:02:29.620715   31212 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 16:02:29.642836   31212 out.go:177] * [newest-cni-20220728160133-12923] minikube v1.26.0 on Darwin 12.5
	I0728 16:02:29.685952   31212 notify.go:193] Checking for updates...
	I0728 16:02:29.707624   31212 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 16:02:29.729048   31212 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 16:02:29.751059   31212 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 16:02:29.772938   31212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 16:02:29.794784   31212 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 16:02:29.817184   31212 config.go:178] Loaded profile config "newest-cni-20220728160133-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 16:02:29.817825   31212 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 16:02:29.884824   31212 docker.go:137] docker version: linux-20.10.17
	I0728 16:02:29.884958   31212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 16:02:30.016461   31212 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 23:02:29.951122434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 16:02:30.058952   31212 out.go:177] * Using the docker driver based on existing profile
	I0728 16:02:30.081202   31212 start.go:284] selected driver: docker
	I0728 16:02:30.081227   31212 start.go:808] validating driver "docker" against &{Name:newest-cni-20220728160133-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220728160133-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 16:02:30.081399   31212 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 16:02:30.084734   31212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 16:02:30.215852   31212 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2022-07-28 23:02:30.151469441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 16:02:30.216047   31212 start_flags.go:872] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0728 16:02:30.216068   31212 cni.go:95] Creating CNI manager for ""
	I0728 16:02:30.216080   31212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 16:02:30.216095   31212 start_flags.go:310] config:
	{Name:newest-cni-20220728160133-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220728160133-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 16:02:30.258426   31212 out.go:177] * Starting control plane node newest-cni-20220728160133-12923 in cluster newest-cni-20220728160133-12923
	I0728 16:02:30.279453   31212 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 16:02:30.300285   31212 out.go:177] * Pulling base image ...
	I0728 16:02:30.342972   31212 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 16:02:30.343034   31212 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 16:02:30.343067   31212 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0728 16:02:30.343091   31212 cache.go:57] Caching tarball of preloaded images
	I0728 16:02:30.343264   31212 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0728 16:02:30.343286   31212 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.3 on docker
	I0728 16:02:30.344223   31212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/config.json ...
	I0728 16:02:30.408173   31212 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon, skipping pull
	I0728 16:02:30.408192   31212 cache.go:142] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in daemon, skipping load
	I0728 16:02:30.408203   31212 cache.go:208] Successfully downloaded all kic artifacts
	I0728 16:02:30.408274   31212 start.go:370] acquiring machines lock for newest-cni-20220728160133-12923: {Name:mkde4349139da57471ff6865c409a88cc56837e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 16:02:30.408353   31212 start.go:374] acquired machines lock for "newest-cni-20220728160133-12923" in 60.272µs
	I0728 16:02:30.408375   31212 start.go:95] Skipping create...Using existing machine configuration
	I0728 16:02:30.408384   31212 fix.go:55] fixHost starting: 
	I0728 16:02:30.408637   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:30.472018   31212 fix.go:103] recreateIfNeeded on newest-cni-20220728160133-12923: state=Stopped err=<nil>
	W0728 16:02:30.472048   31212 fix.go:129] unexpected machine state, will restart: <nil>
	I0728 16:02:30.515834   31212 out.go:177] * Restarting existing docker container for "newest-cni-20220728160133-12923" ...
	I0728 16:02:30.537952   31212 cli_runner.go:164] Run: docker start newest-cni-20220728160133-12923
	I0728 16:02:30.869264   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:30.933448   31212 kic.go:415] container "newest-cni-20220728160133-12923" state is running.
	I0728 16:02:30.934009   31212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220728160133-12923
	I0728 16:02:31.002987   31212 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/config.json ...
	I0728 16:02:31.003389   31212 machine.go:88] provisioning docker machine ...
	I0728 16:02:31.003411   31212 ubuntu.go:169] provisioning hostname "newest-cni-20220728160133-12923"
	I0728 16:02:31.003476   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:31.070544   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:31.070746   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:31.070761   31212 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220728160133-12923 && echo "newest-cni-20220728160133-12923" | sudo tee /etc/hostname
	I0728 16:02:31.201548   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220728160133-12923
	
	I0728 16:02:31.201634   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:31.266510   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:31.266676   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:31.266694   31212 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220728160133-12923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220728160133-12923/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220728160133-12923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0728 16:02:31.385812   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0728 16:02:31.385833   31212 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube}
	I0728 16:02:31.385867   31212 ubuntu.go:177] setting up certificates
	I0728 16:02:31.385874   31212 provision.go:83] configureAuth start
	I0728 16:02:31.385943   31212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220728160133-12923
	I0728 16:02:31.451614   31212 provision.go:138] copyHostCerts
	I0728 16:02:31.451705   31212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem, removing ...
	I0728 16:02:31.451715   31212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem
	I0728 16:02:31.451804   31212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/key.pem (1679 bytes)
	I0728 16:02:31.452003   31212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem, removing ...
	I0728 16:02:31.452013   31212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem
	I0728 16:02:31.452087   31212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.pem (1082 bytes)
	I0728 16:02:31.452228   31212 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem, removing ...
	I0728 16:02:31.452233   31212 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem
	I0728 16:02:31.452288   31212 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cert.pem (1123 bytes)
	I0728 16:02:31.452406   31212 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220728160133-12923 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220728160133-12923]
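The server certificate is regenerated only when missing, with the SAN list shown above (node IP 192.168.67.2, localhost, minikube, the hostname). A quick manual check of the SANs actually baked into the cert on the node, assuming openssl is available inside the kicbase image (sketch, not part of the test run):

    out/minikube-darwin-amd64 ssh -p newest-cni-20220728160133-12923 \
      "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"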
	I0728 16:02:31.570157   31212 provision.go:172] copyRemoteCerts
	I0728 16:02:31.570217   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0728 16:02:31.570260   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:31.633580   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:31.718874   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0728 16:02:31.736262   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0728 16:02:31.753572   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0728 16:02:31.770074   31212 provision.go:86] duration metric: configureAuth took 384.191515ms
	I0728 16:02:31.770089   31212 ubuntu.go:193] setting minikube options for container-runtime
	I0728 16:02:31.770242   31212 config.go:178] Loaded profile config "newest-cni-20220728160133-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 16:02:31.770295   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:31.834721   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:31.834895   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:31.834906   31212 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0728 16:02:31.957197   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0728 16:02:31.957210   31212 ubuntu.go:71] root file system type: overlay
	I0728 16:02:31.957365   31212 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0728 16:02:31.957437   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.021693   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:32.021854   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:32.021903   31212 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0728 16:02:32.149282   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0728 16:02:32.149364   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.213109   31212 main.go:134] libmachine: Using SSH client type: native
	I0728 16:02:32.213274   31212 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13d2e20] 0x13d5e80 <nil>  [] 0s} 127.0.0.1 60086 <nil> <nil>}
	I0728 16:02:32.213289   31212 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0728 16:02:32.338244   31212 main.go:134] libmachine: SSH cmd err, output: <nil>: 
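The unit update above is deliberately idempotent: the rendered unit is written to docker.service.new, diffed against the live unit, and only on a difference swapped into place followed by daemon-reload and restart; the empty output here indicates the rendered unit already matched. The same probes this log runs later can be issued by hand to inspect what systemd loaded and the resulting cgroup driver (sketch):

    out/minikube-darwin-amd64 ssh -p newest-cni-20220728160133-12923 "sudo systemctl cat docker.service"
    out/minikube-darwin-amd64 ssh -p newest-cni-20220728160133-12923 "docker info --format '{{.CgroupDriver}}'"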
	I0728 16:02:32.338264   31212 machine.go:91] provisioned docker machine in 1.334887211s
	I0728 16:02:32.338275   31212 start.go:307] post-start starting for "newest-cni-20220728160133-12923" (driver="docker")
	I0728 16:02:32.338281   31212 start.go:334] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0728 16:02:32.338339   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0728 16:02:32.338391   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.401686   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:32.486912   31212 ssh_runner.go:195] Run: cat /etc/os-release
	I0728 16:02:32.490645   31212 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0728 16:02:32.490662   31212 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0728 16:02:32.490677   31212 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0728 16:02:32.490684   31212 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0728 16:02:32.490696   31212 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/addons for local assets ...
	I0728 16:02:32.490812   31212 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files for local assets ...
	I0728 16:02:32.490953   31212 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem -> 129232.pem in /etc/ssl/certs
	I0728 16:02:32.491096   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0728 16:02:32.498304   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /etc/ssl/certs/129232.pem (1708 bytes)
	I0728 16:02:32.516144   31212 start.go:310] post-start completed in 177.863755ms
	I0728 16:02:32.516223   31212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 16:02:32.516271   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.580139   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:32.665740   31212 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0728 16:02:32.670469   31212 fix.go:57] fixHost completed within 2.262122262s
	I0728 16:02:32.670481   31212 start.go:82] releasing machines lock for "newest-cni-20220728160133-12923", held for 2.262159415s
	I0728 16:02:32.670550   31212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220728160133-12923
	I0728 16:02:32.734665   31212 ssh_runner.go:195] Run: systemctl --version
	I0728 16:02:32.734676   31212 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0728 16:02:32.734725   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.734770   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:32.803568   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:32.803742   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:33.085349   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0728 16:02:33.093438   31212 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I0728 16:02:33.105957   31212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 16:02:33.176841   31212 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0728 16:02:33.256944   31212 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0728 16:02:33.267572   31212 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0728 16:02:33.267647   31212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0728 16:02:33.277112   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0728 16:02:33.289609   31212 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0728 16:02:33.356813   31212 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0728 16:02:33.432618   31212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 16:02:33.497815   31212 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0728 16:02:33.738564   31212 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0728 16:02:33.802407   31212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0728 16:02:33.879275   31212 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I0728 16:02:33.888513   31212 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0728 16:02:33.888576   31212 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0728 16:02:33.892665   31212 start.go:471] Will wait 60s for crictl version
	I0728 16:02:33.892723   31212 ssh_runner.go:195] Run: sudo crictl version
	I0728 16:02:33.921786   31212 start.go:480] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.17
	RuntimeApiVersion:  1.41.0
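crictl was pointed at cri-dockerd via /etc/crictl.yaml a few lines up, and the version probe above confirms the socket answers (Docker 20.10.17 behind CRI API 1.41.0). The same check can be repeated from a shell on the host (sketch):

    out/minikube-darwin-amd64 ssh -p newest-cni-20220728160133-12923 "sudo cat /etc/crictl.yaml"
    out/minikube-darwin-amd64 ssh -p newest-cni-20220728160133-12923 "sudo crictl version"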
	I0728 16:02:33.921853   31212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 16:02:33.960795   31212 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0728 16:02:34.049447   31212 out.go:204] * Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
	I0728 16:02:34.049621   31212 cli_runner.go:164] Run: docker exec -t newest-cni-20220728160133-12923 dig +short host.docker.internal
	I0728 16:02:34.171213   31212 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0728 16:02:34.171311   31212 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0728 16:02:34.175390   31212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
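The rewrite-to-temp-then-cp pattern above exists because shell output redirection happens in the unprivileged caller, so a plain `sudo ... > /etc/hosts` would fail; writing /tmp/h.$$ needs no privileges and only the final cp runs under sudo. An equivalent standalone form using tee (hypothetical sketch, not what minikube runs):

    # rebuild the hosts file without the stale entry, append the new one,
    # and let sudo-owned tee/mv do the privileged writes
    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.65.2\thost.minikube.internal\n'; } \
      | sudo tee /etc/hosts.new >/dev/null && sudo mv /etc/hosts.new /etc/hosts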
	I0728 16:02:34.184506   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:34.270359   31212 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0728 16:02:34.292818   31212 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 16:02:34.292978   31212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 16:02:34.324733   31212 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 16:02:34.324750   31212 docker.go:542] Images already preloaded, skipping extraction
	I0728 16:02:34.324839   31212 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0728 16:02:34.354027   31212 docker.go:611] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.3
	k8s.gcr.io/kube-controller-manager:v1.24.3
	k8s.gcr.io/kube-proxy:v1.24.3
	k8s.gcr.io/kube-scheduler:v1.24.3
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0728 16:02:34.354046   31212 cache_images.go:84] Images are preloaded, skipping loading
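Both `docker images` listings match the manifest of the v1.24.3 preload tarball, so extraction is skipped. The same inventory can be pulled by hand (sketch):

    out/minikube-darwin-amd64 ssh -p newest-cni-20220728160133-12923 \
      "docker images --format {{.Repository}}:{{.Tag}}"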
	I0728 16:02:34.354131   31212 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0728 16:02:34.426471   31212 cni.go:95] Creating CNI manager for ""
	I0728 16:02:34.426483   31212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 16:02:34.426497   31212 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0728 16:02:34.426516   31212 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220728160133-12923 NodeName:newest-cni-20220728160133-12923 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0728 16:02:34.426649   31212 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-20220728160133-12923"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
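
The generated config above threads the test's pod-network-cidr through as both the kubeadm podSubnet and the kube-proxy clusterCIDR. One way to confirm the running cluster picked it up, assuming the standard kubeadm-managed kube-proxy ConfigMap and that minikube registered the profile name as a kubectl context (sketch):

    kubectl --context newest-cni-20220728160133-12923 -n kube-system \
      get configmap kube-proxy -o yaml | grep clusterCIDR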
	
	I0728 16:02:34.426743   31212 kubeadm.go:961] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220728160133-12923 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220728160133-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
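The kubelet ExecStart above lands in the 10-kubeadm.conf drop-in transferred just below; note it wires the kubelet to cri-dockerd for both the runtime and image-service endpoints and repeats the ServerSideApply feature gate. To see the unit plus drop-ins systemd will actually use (sketch):

    out/minikube-darwin-amd64 ssh -p newest-cni-20220728160133-12923 "sudo systemctl cat kubelet"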
	I0728 16:02:34.426805   31212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
	I0728 16:02:34.433968   31212 binaries.go:44] Found k8s binaries, skipping transfer
	I0728 16:02:34.434013   31212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0728 16:02:34.440954   31212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (530 bytes)
	I0728 16:02:34.453330   31212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0728 16:02:34.465174   31212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2189 bytes)
	I0728 16:02:34.477464   31212 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0728 16:02:34.481122   31212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0728 16:02:34.490283   31212 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923 for IP: 192.168.67.2
	I0728 16:02:34.490387   31212 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key
	I0728 16:02:34.490437   31212 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key
	I0728 16:02:34.490514   31212 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/client.key
	I0728 16:02:34.490573   31212 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/apiserver.key.c7fa3a9e
	I0728 16:02:34.490619   31212 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/proxy-client.key
	I0728 16:02:34.490812   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem (1338 bytes)
	W0728 16:02:34.490848   31212 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923_empty.pem, impossibly tiny 0 bytes
	I0728 16:02:34.490863   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca-key.pem (1679 bytes)
	I0728 16:02:34.490893   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/ca.pem (1082 bytes)
	I0728 16:02:34.490922   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/cert.pem (1123 bytes)
	I0728 16:02:34.490949   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/key.pem (1679 bytes)
	I0728 16:02:34.491006   31212 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem (1708 bytes)
	I0728 16:02:34.491520   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0728 16:02:34.507860   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0728 16:02:34.524173   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0728 16:02:34.540334   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/newest-cni-20220728160133-12923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0728 16:02:34.556938   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0728 16:02:34.573639   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0728 16:02:34.590137   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0728 16:02:34.607166   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0728 16:02:34.642640   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/ssl/certs/129232.pem --> /usr/share/ca-certificates/129232.pem (1708 bytes)
	I0728 16:02:34.659491   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0728 16:02:34.676033   31212 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/certs/12923.pem --> /usr/share/ca-certificates/12923.pem (1338 bytes)
	I0728 16:02:34.693030   31212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0728 16:02:34.705522   31212 ssh_runner.go:195] Run: openssl version
	I0728 16:02:34.710856   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0728 16:02:34.718744   31212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0728 16:02:34.722654   31212 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jul 28 21:40 /usr/share/ca-certificates/minikubeCA.pem
	I0728 16:02:34.722702   31212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0728 16:02:34.728027   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0728 16:02:34.735333   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12923.pem && ln -fs /usr/share/ca-certificates/12923.pem /etc/ssl/certs/12923.pem"
	I0728 16:02:34.743694   31212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12923.pem
	I0728 16:02:34.747931   31212 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jul 28 21:44 /usr/share/ca-certificates/12923.pem
	I0728 16:02:34.747982   31212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12923.pem
	I0728 16:02:34.753590   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12923.pem /etc/ssl/certs/51391683.0"
	I0728 16:02:34.763104   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129232.pem && ln -fs /usr/share/ca-certificates/129232.pem /etc/ssl/certs/129232.pem"
	I0728 16:02:34.770837   31212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129232.pem
	I0728 16:02:34.775515   31212 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jul 28 21:44 /usr/share/ca-certificates/129232.pem
	I0728 16:02:34.775562   31212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129232.pem
	I0728 16:02:34.780899   31212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129232.pem /etc/ssl/certs/3ec20f2e.0"
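The openssl x509 -hash calls above explain the odd-looking symlink names (b5213941.0, 51391683.0, 3ec20f2e.0): OpenSSL locates a CA in /etc/ssl/certs by the subject-name hash of the certificate, so each installed PEM needs a <hash>.0 symlink. A hedged Go sketch reproducing that hash-and-link step by shelling out to the same openssl binary (cert path is illustrative; creating the link requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path from the log

	// Ask openssl for the subject-name hash, exactly as the log does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL looks up CAs as <hash>.0 inside the certs directory.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}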
	I0728 16:02:34.789548   31212 kubeadm.go:395] StartCluster: {Name:newest-cni-20220728160133-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:newest-cni-20220728160133-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.6.0@sha256:4af9580485920635d888efe1eddbd67e12f9d5d84dba87100e93feb4e46636b3 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
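The StartCluster line is a %+v dump of minikube's whole cluster-config struct. A heavily trimmed, hypothetical Go reconstruction of its shape, keeping only fields visible in the dump (the real type lives in minikube's config package and has many more fields):

package main

import "fmt"

// Hypothetical, trimmed reconstruction for illustration only.
type KubernetesConfig struct {
	KubernetesVersion string // "v1.24.3" in this run
	ClusterName       string
	ContainerRuntime  string // "docker"
	NetworkPlugin     string // "cni" for the newest-cni profile
	ServiceCIDR       string
}

type Node struct {
	Name              string
	IP                string // 192.168.67.2
	Port              int    // 8443
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

type ClusterConfig struct {
	Name             string
	Driver           string // "docker"
	Memory           int    // MiB
	CPUs             int
	KubernetesConfig KubernetesConfig
	Nodes            []Node
	Addons           map[string]bool // dashboard, metrics-server, ...
}

func main() {
	fmt.Printf("%+v\n", ClusterConfig{Name: "newest-cni-20220728160133-12923", Driver: "docker"})
}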
	I0728 16:02:34.789678   31212 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 16:02:34.820043   31212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0728 16:02:34.828195   31212 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0728 16:02:34.828209   31212 kubeadm.go:626] restartCluster start
	I0728 16:02:34.828259   31212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0728 16:02:34.835176   31212 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:34.835231   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:34.899691   31212 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220728160133-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 16:02:34.899864   31212 kubeconfig.go:127] "newest-cni-20220728160133-12923" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig - will repair!
	I0728 16:02:34.900174   31212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
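The repair logged above starts by noticing the profile's context is missing from the kubeconfig file. With client-go that check is a few lines; a sketch under stated assumptions (file path is illustrative, context name taken from the log, error handling minimal):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := "/Users/jenkins/.kube/config" // illustrative; the run above uses the integration tree's kubeconfig
	name := "newest-cni-20220728160133-12923"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("%q context is missing from %s - will repair!\n", name, path)
		// Repair would add cluster, auth-info, and context entries, then
		// persist with clientcmd.WriteToFile(*cfg, path) under a file lock,
		// which matches the WriteFile lock acquisition in the log.
	}
}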
	I0728 16:02:34.901373   31212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0728 16:02:34.908787   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:34.908845   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:34.917197   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.119337   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.119544   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.129792   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.319312   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.319477   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.331080   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.518938   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.519039   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.528435   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.719350   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.719616   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.730406   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:35.919401   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:35.919549   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:35.930377   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.119438   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.119541   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.130056   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.319337   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.319525   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.329785   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.519379   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.519516   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.529835   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.719302   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.719398   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.730179   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:36.917372   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:36.917496   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:36.928519   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.119347   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.119507   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.129922   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.319301   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.319531   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.330062   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.519308   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.519551   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.529927   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.717327   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.717423   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.726044   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.919322   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.919438   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.929738   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:37.929748   31212 api_server.go:165] Checking apiserver status ...
	I0728 16:02:37.929795   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0728 16:02:37.937878   31212 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
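Each "Checking apiserver status" iteration above is one tick of a fixed-interval poll: run pgrep for the kube-apiserver process (over SSH in the real flow), treat exit status 1 as "not up yet", and give up at a deadline. A local Go sketch of that loop (the ~200ms cadence matches the timestamps above; the timeout value here is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the timeout expires.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when a process matches, 1 when none does.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for the condition")
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 200*time.Millisecond, 3*time.Second)
	fmt.Println("result:", err)
}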
	I0728 16:02:37.937891   31212 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0728 16:02:37.937898   31212 kubeadm.go:1092] stopping kube-system containers ...
	I0728 16:02:37.937949   31212 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0728 16:02:37.968534   31212 docker.go:443] Stopping containers: [df67d963fd1a e9afc45d6898 530b6eb6d7c7 97305fddcc4a 48abc99a84e2 bfe426170ea1 89e2d6baa776 c9e14b5f6cc9 ba52b06cc532 5e4060df4054 0ffd2a96459e 6795e4facb30 f41fe719b176 857d35ea3c0e 09e8e2df7716 75a84df66ff9 9624f7dfb492]
	I0728 16:02:37.968612   31212 ssh_runner.go:195] Run: docker stop df67d963fd1a e9afc45d6898 530b6eb6d7c7 97305fddcc4a 48abc99a84e2 bfe426170ea1 89e2d6baa776 c9e14b5f6cc9 ba52b06cc532 5e4060df4054 0ffd2a96459e 6795e4facb30 f41fe719b176 857d35ea3c0e 09e8e2df7716 75a84df66ff9 9624f7dfb492
	I0728 16:02:37.997807   31212 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0728 16:02:38.008050   31212 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0728 16:02:38.015217   31212 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 28 23:01 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 28 23:01 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Jul 28 23:01 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 28 23:01 /etc/kubernetes/scheduler.conf
	
	I0728 16:02:38.015267   31212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0728 16:02:38.022571   31212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0728 16:02:38.029680   31212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0728 16:02:38.036703   31212 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:38.036746   31212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0728 16:02:38.043830   31212 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0728 16:02:38.050885   31212 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0728 16:02:38.050935   31212 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0728 16:02:38.058048   31212 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0728 16:02:38.065404   31212 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0728 16:02:38.065414   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:38.111280   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:38.799836   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:38.977237   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:39.023772   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
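Rather than tearing the node down, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing config, as the five Run lines above show. A hedged sketch of driving that sequence with os/exec (assumes kubeadm on PATH and a local kubeadm.yaml; the real code runs these over SSH with a pinned binary directory on PATH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("control plane re-bootstrapped from existing state")
}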
	I0728 16:02:39.067660   31212 api_server.go:51] waiting for apiserver process to appear ...
	I0728 16:02:39.067715   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:02:39.597308   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:02:40.096654   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:02:40.109739   31212 api_server.go:71] duration metric: took 1.042098966s to wait for apiserver process to appear ...
	I0728 16:02:40.109760   31212 api_server.go:87] waiting for apiserver healthz status ...
	I0728 16:02:40.109772   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:40.110868   31212 api_server.go:256] stopped: https://127.0.0.1:60085/healthz: Get "https://127.0.0.1:60085/healthz": EOF
	I0728 16:02:40.612933   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:43.301630   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0728 16:02:43.301646   31212 api_server.go:102] status: https://127.0.0.1:60085/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0728 16:02:43.611041   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:43.617933   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 16:02:43.617948   31212 api_server.go:102] status: https://127.0.0.1:60085/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 16:02:44.112038   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:44.118015   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0728 16:02:44.118027   31212 api_server.go:102] status: https://127.0.0.1:60085/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0728 16:02:44.610927   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:44.635752   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 200:
	ok
	I0728 16:02:44.642468   31212 api_server.go:140] control plane version: v1.24.3
	I0728 16:02:44.642485   31212 api_server.go:130] duration metric: took 4.532795077s to wait for apiserver health ...
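The healthz wait above tolerates two intermediate states before declaring success: a 403 (the apiserver is serving, but anonymous access to /healthz is forbidden while RBAC bootstraps) and a 500 with individual check failures (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes), both of which resolve within seconds. A Go sketch of such a loop, skipping certificate verification the way a forwarded-localhost probe must (URL and timings are illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver is reached through a forwarded localhost port, so its
	// cert won't match 127.0.0.1; skip verification for the probe only.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://127.0.0.1:60085/healthz" // port taken from the log

	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403/500 mean "up but not ready yet"; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}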
	I0728 16:02:44.642492   31212 cni.go:95] Creating CNI manager for ""
	I0728 16:02:44.642496   31212 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 16:02:44.642514   31212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 16:02:44.649448   31212 system_pods.go:59] 8 kube-system pods found
	I0728 16:02:44.649469   31212 system_pods.go:61] "coredns-6d4b75cb6d-prc72" [d43bde09-e312-4bea-952e-daf9ee264c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0728 16:02:44.649476   31212 system_pods.go:61] "etcd-newest-cni-20220728160133-12923" [a6bb18a1-1ae7-40d9-a37c-2e5648f96e42] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0728 16:02:44.649481   31212 system_pods.go:61] "kube-apiserver-newest-cni-20220728160133-12923" [b6e1fc8f-718f-493d-b415-4216550c6c10] Running
	I0728 16:02:44.649487   31212 system_pods.go:61] "kube-controller-manager-newest-cni-20220728160133-12923" [4b232ec1-a4c1-4396-a592-cd9890a34eeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0728 16:02:44.649494   31212 system_pods.go:61] "kube-proxy-7jx99" [e05eddca-b40e-419b-b509-2a5b2523c7da] Running
	I0728 16:02:44.649498   31212 system_pods.go:61] "kube-scheduler-newest-cni-20220728160133-12923" [cce41ddb-daea-4fe7-8b7c-609cd384981a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 16:02:44.649504   31212 system_pods.go:61] "metrics-server-5c6f97fb75-h4qvh" [43e62268-70c1-4a6e-9411-0ab4fa1c30f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 16:02:44.649508   31212 system_pods.go:61] "storage-provisioner" [2dc9b609-a74c-428c-a22d-c26db32aacf1] Running
	I0728 16:02:44.649512   31212 system_pods.go:74] duration metric: took 6.9952ms to wait for pod list to return data ...
	I0728 16:02:44.649518   31212 node_conditions.go:102] verifying NodePressure condition ...
	I0728 16:02:44.652283   31212 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 16:02:44.652297   31212 node_conditions.go:123] node cpu capacity is 6
	I0728 16:02:44.652305   31212 node_conditions.go:105] duration metric: took 2.784314ms to run NodePressure ...
	I0728 16:02:44.652318   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0728 16:02:44.782768   31212 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0728 16:02:44.791228   31212 ops.go:34] apiserver oom_adj: -16
	I0728 16:02:44.791238   31212 kubeadm.go:630] restartCluster took 9.963191519s
	I0728 16:02:44.791245   31212 kubeadm.go:397] StartCluster complete in 10.001871582s
	I0728 16:02:44.791258   31212 settings.go:142] acquiring lock: {Name:mkecbd592635c192636e29756b725e41a999575d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:02:44.791330   31212 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 16:02:44.791949   31212 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig: {Name:mk9b88a9443d9edc0b455f19611f908884254f58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 16:02:44.795248   31212 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220728160133-12923" rescaled to 1
	I0728 16:02:44.795284   31212 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0728 16:02:44.795309   31212 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0728 16:02:44.817333   31212 out.go:177] * Verifying Kubernetes components...
	I0728 16:02:44.795312   31212 addons.go:412] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0728 16:02:44.795464   31212 config.go:178] Loaded profile config "newest-cni-20220728160133-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 16:02:44.873731   31212 start.go:789] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0728 16:02:44.875008   31212 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220728160133-12923"
	I0728 16:02:44.875021   31212 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220728160133-12923"
	I0728 16:02:44.875027   31212 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220728160133-12923"
	I0728 16:02:44.875027   31212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 16:02:44.875033   31212 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220728160133-12923"
	W0728 16:02:44.875037   31212 addons.go:162] addon storage-provisioner should already be in state true
	I0728 16:02:44.875037   31212 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220728160133-12923"
	W0728 16:02:44.875057   31212 addons.go:162] addon metrics-server should already be in state true
	I0728 16:02:44.875061   31212 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220728160133-12923"
	I0728 16:02:44.875049   31212 addons.go:65] Setting dashboard=true in profile "newest-cni-20220728160133-12923"
	I0728 16:02:44.875101   31212 host.go:66] Checking if "newest-cni-20220728160133-12923" exists ...
	I0728 16:02:44.875122   31212 host.go:66] Checking if "newest-cni-20220728160133-12923" exists ...
	I0728 16:02:44.875135   31212 addons.go:153] Setting addon dashboard=true in "newest-cni-20220728160133-12923"
	W0728 16:02:44.875145   31212 addons.go:162] addon dashboard should already be in state true
	I0728 16:02:44.875186   31212 host.go:66] Checking if "newest-cni-20220728160133-12923" exists ...
	I0728 16:02:44.875380   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:44.875540   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:44.876287   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:44.877471   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:44.891602   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:44.978222   31212 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220728160133-12923"
	W0728 16:02:45.038151   31212 addons.go:162] addon default-storageclass should already be in state true
	I0728 16:02:44.995981   31212 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 16:02:45.017281   31212 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0728 16:02:45.038109   31212 out.go:177]   - Using image kubernetesui/dashboard:v2.6.0
	I0728 16:02:45.038184   31212 host.go:66] Checking if "newest-cni-20220728160133-12923" exists ...
	I0728 16:02:45.059127   31212 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 16:02:45.117439   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0728 16:02:45.059503   31212 cli_runner.go:164] Run: docker container inspect newest-cni-20220728160133-12923 --format={{.State.Status}}
	I0728 16:02:45.080408   31212 addons.go:345] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0728 16:02:45.114145   31212 api_server.go:51] waiting for apiserver process to appear ...
	I0728 16:02:45.117604   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:45.155111   31212 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0728 16:02:45.155127   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0728 16:02:45.155280   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:45.155287   31212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 16:02:45.176338   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0728 16:02:45.176362   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0728 16:02:45.176483   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:45.190146   31212 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I0728 16:02:45.190172   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0728 16:02:45.190332   31212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220728160133-12923
	I0728 16:02:45.193858   31212 api_server.go:71] duration metric: took 398.53275ms to wait for apiserver process to appear ...
	I0728 16:02:45.193933   31212 api_server.go:87] waiting for apiserver healthz status ...
	I0728 16:02:45.193959   31212 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:60085/healthz ...
	I0728 16:02:45.203508   31212 api_server.go:266] https://127.0.0.1:60085/healthz returned 200:
	ok
	I0728 16:02:45.205814   31212 api_server.go:140] control plane version: v1.24.3
	I0728 16:02:45.205868   31212 api_server.go:130] duration metric: took 11.908253ms to wait for apiserver health ...
	I0728 16:02:45.205895   31212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0728 16:02:45.215719   31212 system_pods.go:59] 8 kube-system pods found
	I0728 16:02:45.215742   31212 system_pods.go:61] "coredns-6d4b75cb6d-prc72" [d43bde09-e312-4bea-952e-daf9ee264c55] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0728 16:02:45.215770   31212 system_pods.go:61] "etcd-newest-cni-20220728160133-12923" [a6bb18a1-1ae7-40d9-a37c-2e5648f96e42] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0728 16:02:45.215782   31212 system_pods.go:61] "kube-apiserver-newest-cni-20220728160133-12923" [b6e1fc8f-718f-493d-b415-4216550c6c10] Running
	I0728 16:02:45.215805   31212 system_pods.go:61] "kube-controller-manager-newest-cni-20220728160133-12923" [4b232ec1-a4c1-4396-a592-cd9890a34eeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0728 16:02:45.215826   31212 system_pods.go:61] "kube-proxy-7jx99" [e05eddca-b40e-419b-b509-2a5b2523c7da] Running
	I0728 16:02:45.215846   31212 system_pods.go:61] "kube-scheduler-newest-cni-20220728160133-12923" [cce41ddb-daea-4fe7-8b7c-609cd384981a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0728 16:02:45.215870   31212 system_pods.go:61] "metrics-server-5c6f97fb75-h4qvh" [43e62268-70c1-4a6e-9411-0ab4fa1c30f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0728 16:02:45.215878   31212 system_pods.go:61] "storage-provisioner" [2dc9b609-a74c-428c-a22d-c26db32aacf1] Running
	I0728 16:02:45.215884   31212 system_pods.go:74] duration metric: took 9.974897ms to wait for pod list to return data ...
	I0728 16:02:45.215894   31212 default_sa.go:34] waiting for default service account to be created ...
	I0728 16:02:45.220763   31212 default_sa.go:45] found service account: "default"
	I0728 16:02:45.220781   31212 default_sa.go:55] duration metric: took 4.880297ms for default service account to be created ...
	I0728 16:02:45.220800   31212 kubeadm.go:572] duration metric: took 425.506612ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0728 16:02:45.220825   31212 node_conditions.go:102] verifying NodePressure condition ...
	I0728 16:02:45.225563   31212 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0728 16:02:45.225578   31212 node_conditions.go:123] node cpu capacity is 6
	I0728 16:02:45.225586   31212 node_conditions.go:105] duration metric: took 4.7554ms to run NodePressure ...
	I0728 16:02:45.225596   31212 start.go:216] waiting for startup goroutines ...
	I0728 16:02:45.248531   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:45.278823   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:45.279030   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:45.289678   31212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60086 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/newest-cni-20220728160133-12923/id_rsa Username:docker}
	I0728 16:02:45.403448   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0728 16:02:45.403464   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0728 16:02:45.406521   31212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0728 16:02:45.408574   31212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0728 16:02:45.408585   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0728 16:02:45.416041   31212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0728 16:02:45.497595   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0728 16:02:45.497609   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0728 16:02:45.498283   31212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0728 16:02:45.498299   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0728 16:02:45.595570   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0728 16:02:45.595587   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0728 16:02:45.600458   31212 addons.go:345] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 16:02:45.600472   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0728 16:02:45.681725   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0728 16:02:45.681760   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0728 16:02:45.706741   31212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0728 16:02:45.782502   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0728 16:02:45.782522   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0728 16:02:45.808701   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0728 16:02:45.808714   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0728 16:02:45.895440   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0728 16:02:45.895454   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0728 16:02:45.920964   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0728 16:02:45.920977   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0728 16:02:46.000259   31212 addons.go:345] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 16:02:46.000277   31212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0728 16:02:46.023418   31212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0728 16:02:46.505026   31212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.098500344s)
	I0728 16:02:46.519148   31212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.103104144s)
	I0728 16:02:46.594877   31212 addons.go:383] Verifying addon metrics-server=true in "newest-cni-20220728160133-12923"
	I0728 16:02:46.690101   31212 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0728 16:02:46.748423   31212 addons.go:414] enableAddons completed in 1.953149328s
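The addon flow above is uniform: each enabled addon's manifests are scp'd into /etc/kubernetes/addons/ on the node, then applied in a single kubectl apply with one -f per file, using the in-cluster kubeconfig. A sketch of assembling that invocation (manifest list abbreviated; binary and kubeconfig paths as in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// ...one -f per transferred manifest, as in the log
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.24.3/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("err=%v\n%s", err, out)
}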
	I0728 16:02:46.778888   31212 start.go:506] kubectl: 1.24.2, cluster: 1.24.3 (minor skew: 0)
	I0728 16:02:46.800597   31212 out.go:177] * Done! kubectl is now configured to use "newest-cni-20220728160133-12923" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2022-07-28 23:02:31 UTC, end at Thu 2022-07-28 23:03:31 UTC. --
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.732991345Z" level=info msg="Daemon has completed initialization"
	Jul 28 23:02:33 newest-cni-20220728160133-12923 systemd[1]: Started Docker Application Container Engine.
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.753986251Z" level=info msg="API listen on [::]:2376"
	Jul 28 23:02:33 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:33.759180262Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 28 23:02:46 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:46.095142741Z" level=info msg="ignoring event" container=1073b2237ee09691246c51622bdba357e813d744b05e6208b6c0000ac5b2df93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:02:46 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:46.308467432Z" level=info msg="ignoring event" container=eefc23ae5401939b583510607187252277a522b71242564f606aa8a49ea1f77b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:02:47 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:47.349773236Z" level=info msg="ignoring event" container=2d68668f2766d70399abce9eea04ef9d197aa943011547f0ca937566ca712c9b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:02:47 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:47.351624598Z" level=info msg="ignoring event" container=83c1be755e5d07deb27c2d1db5b9e270ca0c68fd2dd75a91d471c7ed7db439a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:02:48 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:48.264703228Z" level=info msg="ignoring event" container=59d0f547cf6ac37318a1b09fbe3aa45dada60d858ae6a532da391be4d9e4899c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:02:48 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:02:48.301046738Z" level=info msg="ignoring event" container=3a1e8f28a26272ca7b40933b8f13de4c96a9a7d01759446ec572b83925b6b18f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:25 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:25.866292341Z" level=info msg="ignoring event" container=bd1703b60ad8bb63335e7696a1ce34f088a155c44c34c53eec18fcc6d81c6885 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:26 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:26.095501360Z" level=info msg="ignoring event" container=ea35918a4414868ff75cc40ecac67254f01e62157f8d9cb5e84b9a18c50ab047 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:26 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:26.940179499Z" level=info msg="ignoring event" container=233e102665bff33c6efc1cd1c0331920b3577602fccf9c42f8796c5dfc0e0545 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:26 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:26.984202714Z" level=info msg="ignoring event" container=f12bb224e53f7b56b900efee7fd4e5f157bf5fe2548a55ac870f93af155f3409 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:27 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:27.210507335Z" level=info msg="ignoring event" container=49512d8a5eb5e14fd1801a2f644b3836e6e3271ea47be0ad56bb6a11d733bad2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:28 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:28.109514662Z" level=info msg="ignoring event" container=84a852e9212dc08eb61a62d4fbed83f243c89d3eae9e0abe88d06400b2001d40 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:28 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:28.117256220Z" level=info msg="ignoring event" container=6a84a07e828cfa819d1c8d4f2524a35ca3af9e264ac7ab49a733771fe80bc0dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:28 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:28.118841063Z" level=info msg="ignoring event" container=6b4c67b77e494b892d0d36a08ce25a5383d39b2144bc251fddbdcd6b668b8e81 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:28 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:28.150786683Z" level=info msg="ignoring event" container=45ae13091e51e4d38c2dde0cbb5a735e59b5cb5b769e821a2e73f1203ea84790 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:30 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:30.560224257Z" level=info msg="ignoring event" container=cf7244f244224f1596a563851e200300326c96b632f0ee26caaaccdda12330ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:30 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:30.692586703Z" level=info msg="ignoring event" container=c2e720000270bfd413589620bed96417e92b8aa58036e2e69fe143c99cba7587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:30 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:30.694849786Z" level=info msg="ignoring event" container=1c94e8d58e31c0eff6d437427716148eb183b43cf6bd98e15741519644d0f39c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:30 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:30.697542468Z" level=info msg="ignoring event" container=331cf2823aa9b2b71b1d16852b6b9f9c235210a5104fd18537e15bd9a237f00f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:31 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:31.383951801Z" level=info msg="ignoring event" container=ac4e8880ed80609ae2a3ac0242a595bcb3834d3767162dab74264af74fd81cff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 28 23:03:31 newest-cni-20220728160133-12923 dockerd[542]: time="2022-07-28T23:03:31.470306725Z" level=info msg="ignoring event" container=1b75cb5b1b2840940583cbc8d673115f40b2dbb72feff2274078b1c64060ea75 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	8ae7c37eff3f5       6e38f40d628db       46 seconds ago       Running             storage-provisioner       1                   5c92a36ddae3a
	56bcddf7d2fa0       2ae1ba6417cbc       47 seconds ago       Running             kube-proxy                1                   1f55de862e21b
	edc88662680f9       586c112956dfc       52 seconds ago       Running             kube-controller-manager   1                   010af3b0a8368
	3b9280ec90204       aebe758cef4cd       52 seconds ago       Running             etcd                      1                   51816182d04e5
	b788ccc753b1d       3a5aa3a515f5d       52 seconds ago       Running             kube-scheduler            1                   6d7fd19fe7c92
	33001086c552e       d521dd763e2e3       52 seconds ago       Running             kube-apiserver            1                   e026ae869b375
	530b6eb6d7c7a       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   97305fddcc4a5
	89e2d6baa776e       2ae1ba6417cbc       About a minute ago   Exited              kube-proxy                0                   ba52b06cc5320
	5e4060df40544       aebe758cef4cd       About a minute ago   Exited              etcd                      0                   0ffd2a96459ea
	6795e4facb303       586c112956dfc       About a minute ago   Exited              kube-controller-manager   0                   75a84df66ff97
	f41fe719b176d       3a5aa3a515f5d       About a minute ago   Exited              kube-scheduler            0                   09e8e2df77165
	857d35ea3c0ed       d521dd763e2e3       About a minute ago   Exited              kube-apiserver            0                   9624f7dfb492e
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-20220728160133-12923
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-20220728160133-12923
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=363f4186470802814a32480695fe2a353fd5f551
	                    minikube.k8s.io/name=newest-cni-20220728160133-12923
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_07_28T16_02_00_0700
	                    minikube.k8s.io/version=v1.26.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 28 Jul 2022 23:01:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-20220728160133-12923
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 28 Jul 2022 23:03:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 28 Jul 2022 23:03:22 +0000   Thu, 28 Jul 2022 23:01:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 28 Jul 2022 23:03:22 +0000   Thu, 28 Jul 2022 23:01:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 28 Jul 2022 23:03:22 +0000   Thu, 28 Jul 2022 23:01:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 28 Jul 2022 23:03:22 +0000   Thu, 28 Jul 2022 23:03:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    newest-cni-20220728160133-12923
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6086504Ki
	  pods:               110
	System Info:
	  Machine ID:                 855c6c72c86b4657b3d8c3c774fd7e1d
	  System UUID:                0df2c05f-97b9-447d-ba86-6ab9bb1c9e96
	  Boot ID:                    defbf4cf-33ea-4fb3-a1e4-ed0c170fdaf9
	  Kernel Version:             5.10.104-linuxkit
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.17
	  Kubelet Version:            v1.24.3
	  Kube-Proxy Version:         v1.24.3
	PodCIDR:                      192.168.0.0/24
	PodCIDRs:                     192.168.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-prc72                                   100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     78s
	  kube-system                 etcd-newest-cni-20220728160133-12923                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         92s
	  kube-system                 kube-apiserver-newest-cni-20220728160133-12923             250m (4%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-newest-cni-20220728160133-12923    200m (3%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-7jx99                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-newest-cni-20220728160133-12923             100m (1%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 metrics-server-5c6f97fb75-h4qvh                            100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         76s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kubernetes-dashboard        dashboard-metrics-scraper-dffd48c4c-vkblv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kubernetes-dashboard        kubernetes-dashboard-5fd5574d9f-f9884                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  0 (0%)
	  memory             370Mi (6%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 77s                  kube-proxy       
	  Normal  Starting                 46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  103s (x5 over 103s)  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x5 over 103s)  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x4 over 103s)  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasSufficientPID
	  Normal  Starting                 92s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  92s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  92s                  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s                  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s                  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             92s                  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeNotReady
	  Normal  NodeReady                82s                  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeReady
	  Normal  RegisteredNode           79s                  node-controller  Node newest-cni-20220728160133-12923 event: Registered Node newest-cni-20220728160133-12923 in Controller
	  Normal  RegisteredNode           11s                  node-controller  Node newest-cni-20220728160133-12923 event: Registered Node newest-cni-20220728160133-12923 in Controller
	  Normal  Starting                 11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s                  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s                  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s                  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             10s                  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                10s                  kubelet          Node newest-cni-20220728160133-12923 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [3b9280ec9020] <==
	* {"level":"info","ts":"2022-07-28T23:02:40.211Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-07-28T23:02:40.212Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-07-28T23:02:40.212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-07-28T23:02:40.212Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-07-28T23:02:40.212Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:02:40.212Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:02:40.213Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-07-28T23:02:40.213Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T23:02:40.213Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-07-28T23:02:40.213Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-07-28T23:02:40.213Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2022-07-28T23:02:41.647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-07-28T23:02:41.649Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20220728160133-12923 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T23:02:41.649Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T23:02:41.649Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T23:02:41.649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T23:02:41.649Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T23:02:41.650Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-07-28T23:02:41.650Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [5e4060df4054] <==
	* {"level":"info","ts":"2022-07-28T23:01:54.538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-20220728160133-12923 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-07-28T23:01:54.538Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:01:54.539Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-07-28T23:01:54.539Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-07-28T23:01:54.539Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-07-28T23:01:54.539Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:01:54.539Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:01:54.540Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-07-28T23:01:54.540Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-07-28T23:02:17.127Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-07-28T23:02:17.127Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"newest-cni-20220728160133-12923","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/07/28 23:02:17 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/07/28 23:02:17 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-07-28T23:02:17.138Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-07-28T23:02:17.139Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T23:02:17.141Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-07-28T23:02:17.141Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"newest-cni-20220728160133-12923","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  23:03:32 up  1:24,  0 users,  load average: 1.43, 1.03, 1.04
	Linux newest-cni-20220728160133-12923 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [33001086c552] <==
	* I0728 23:02:43.422612       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0728 23:02:43.423906       1 cache.go:39] Caches are synced for autoregister controller
	I0728 23:02:43.424245       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0728 23:02:43.427368       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0728 23:02:44.093060       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0728 23:02:44.325674       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0728 23:02:44.458764       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 23:02:44.458854       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0728 23:02:44.458861       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0728 23:02:44.458878       1 handler_proxy.go:102] no RequestInfo found in the context
	E0728 23:02:44.458967       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0728 23:02:44.459921       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0728 23:02:44.733595       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0728 23:02:44.741395       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0728 23:02:44.770106       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0728 23:02:44.781554       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0728 23:02:44.785636       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0728 23:02:45.223115       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0728 23:02:46.587334       1 controller.go:611] quota admission added evaluator for: namespaces
	I0728 23:02:46.649203       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.97.208.140]
	I0728 23:02:46.657361       1 alloc.go:327] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.96.105.19]
	I0728 23:03:21.562447       1 controller.go:611] quota admission added evaluator for: endpoints
	I0728 23:03:21.570846       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0728 23:03:21.767825       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-apiserver [857d35ea3c0e] <==
	* W0728 23:02:26.362674       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.382486       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.419982       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.453233       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.456753       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.553978       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.577606       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.697954       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.721726       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.739554       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.739596       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.804325       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.849579       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.872432       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.884017       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.884110       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.896150       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.941669       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.963266       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:26.983682       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:27.041592       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:27.089614       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:27.098391       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:27.101917       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0728 23:02:27.194138       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-controller-manager [6795e4facb30] <==
	* I0728 23:02:13.204442       1 shared_informer.go:262] Caches are synced for cronjob
	I0728 23:02:13.255893       1 shared_informer.go:262] Caches are synced for endpoint
	I0728 23:02:13.255977       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0728 23:02:13.256175       1 shared_informer.go:262] Caches are synced for persistent volume
	I0728 23:02:13.304298       1 shared_informer.go:262] Caches are synced for taint
	I0728 23:02:13.304479       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0728 23:02:13.304826       1 node_lifecycle_controller.go:1014] Missing timestamp for Node newest-cni-20220728160133-12923. Assuming now as a timestamp.
	I0728 23:02:13.304967       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0728 23:02:13.304643       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0728 23:02:13.304895       1 event.go:294] "Event occurred" object="newest-cni-20220728160133-12923" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-20220728160133-12923 event: Registered Node newest-cni-20220728160133-12923 in Controller"
	I0728 23:02:13.308668       1 shared_informer.go:262] Caches are synced for resource quota
	I0728 23:02:13.312934       1 shared_informer.go:262] Caches are synced for resource quota
	I0728 23:02:13.720318       1 shared_informer.go:262] Caches are synced for garbage collector
	I0728 23:02:13.778055       1 shared_informer.go:262] Caches are synced for garbage collector
	I0728 23:02:13.778090       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0728 23:02:13.812014       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7jx99"
	I0728 23:02:13.858513       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0728 23:02:13.917496       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-6d4b75cb6d to 1"
	I0728 23:02:14.108686       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-prc72"
	I0728 23:02:14.112586       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-fg5bv"
	I0728 23:02:14.125532       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-6d4b75cb6d-fg5bv"
	I0728 23:02:16.429389       1 event.go:294] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-5c6f97fb75 to 1"
	I0728 23:02:16.433402       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-5c6f97fb75-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0728 23:02:16.438677       1 replica_set.go:550] sync "kube-system/metrics-server-5c6f97fb75" failed with pods "metrics-server-5c6f97fb75-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0728 23:02:16.442091       1 event.go:294] "Event occurred" object="kube-system/metrics-server-5c6f97fb75" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-5c6f97fb75-h4qvh"
	
	* 
	* ==> kube-controller-manager [edc88662680f] <==
	* I0728 23:03:21.571665       1 shared_informer.go:262] Caches are synced for attach detach
	I0728 23:03:21.576079       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0728 23:03:21.576369       1 shared_informer.go:262] Caches are synced for node
	I0728 23:03:21.576489       1 range_allocator.go:173] Starting range CIDR allocator
	I0728 23:03:21.576511       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0728 23:03:21.576379       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0728 23:03:21.576577       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0728 23:03:21.577447       1 shared_informer.go:262] Caches are synced for endpoint
	I0728 23:03:21.577797       1 shared_informer.go:262] Caches are synced for PVC protection
	I0728 23:03:21.594382       1 shared_informer.go:262] Caches are synced for stateful set
	I0728 23:03:21.660311       1 shared_informer.go:262] Caches are synced for daemon sets
	I0728 23:03:21.671827       1 shared_informer.go:262] Caches are synced for service account
	I0728 23:03:21.691271       1 shared_informer.go:262] Caches are synced for namespace
	I0728 23:03:21.757951       1 shared_informer.go:262] Caches are synced for deployment
	I0728 23:03:21.761373       1 shared_informer.go:262] Caches are synced for disruption
	I0728 23:03:21.761408       1 disruption.go:371] Sending events to api server.
	I0728 23:03:21.771222       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5fd5574d9f to 1"
	I0728 23:03:21.771259       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-dffd48c4c to 1"
	I0728 23:03:21.779788       1 shared_informer.go:262] Caches are synced for resource quota
	I0728 23:03:21.814461       1 shared_informer.go:262] Caches are synced for resource quota
	I0728 23:03:21.868938       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5fd5574d9f-f9884"
	I0728 23:03:21.868959       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-dffd48c4c-vkblv"
	I0728 23:03:22.194547       1 shared_informer.go:262] Caches are synced for garbage collector
	I0728 23:03:22.194580       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0728 23:03:22.263748       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [56bcddf7d2fa] <==
	* I0728 23:02:45.059312       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0728 23:02:45.059516       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0728 23:02:45.059873       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 23:02:45.214125       1 server_others.go:206] "Using iptables Proxier"
	I0728 23:02:45.214200       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 23:02:45.214213       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 23:02:45.214239       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 23:02:45.214285       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 23:02:45.216569       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 23:02:45.217040       1 server.go:661] "Version info" version="v1.24.3"
	I0728 23:02:45.217296       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 23:02:45.218254       1 config.go:317] "Starting service config controller"
	I0728 23:02:45.219411       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 23:02:45.218726       1 config.go:226] "Starting endpoint slice config controller"
	I0728 23:02:45.219488       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 23:02:45.219567       1 config.go:444] "Starting node config controller"
	I0728 23:02:45.219602       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 23:02:45.320635       1 shared_informer.go:262] Caches are synced for node config
	I0728 23:02:45.320658       1 shared_informer.go:262] Caches are synced for service config
	I0728 23:02:45.320929       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [89e2d6baa776] <==
	* I0728 23:02:14.393528       1 node.go:163] Successfully retrieved node IP: 192.168.67.2
	I0728 23:02:14.393595       1 server_others.go:138] "Detected node IP" address="192.168.67.2"
	I0728 23:02:14.393637       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0728 23:02:14.552552       1 server_others.go:206] "Using iptables Proxier"
	I0728 23:02:14.552608       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0728 23:02:14.552618       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0728 23:02:14.552628       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0728 23:02:14.552659       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 23:02:14.552768       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0728 23:02:14.554065       1 server.go:661] "Version info" version="v1.24.3"
	I0728 23:02:14.554141       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 23:02:14.555129       1 config.go:317] "Starting service config controller"
	I0728 23:02:14.555184       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0728 23:02:14.555203       1 config.go:226] "Starting endpoint slice config controller"
	I0728 23:02:14.555206       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0728 23:02:14.555896       1 config.go:444] "Starting node config controller"
	I0728 23:02:14.555905       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0728 23:02:14.656315       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0728 23:02:14.656352       1 shared_informer.go:262] Caches are synced for service config
	I0728 23:02:14.656468       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b788ccc753b1] <==
	* W0728 23:02:40.206931       1 feature_gate.go:237] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0728 23:02:40.757435       1 serving.go:348] Generated self-signed cert in-memory
	W0728 23:02:43.325546       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0728 23:02:43.325583       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0728 23:02:43.325590       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0728 23:02:43.325595       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0728 23:02:43.335841       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.3"
	I0728 23:02:43.335874       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0728 23:02:43.339632       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0728 23:02:43.339709       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0728 23:02:43.339915       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0728 23:02:43.339719       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0728 23:02:43.442240       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [f41fe719b176] <==
	* E0728 23:01:56.950695       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0728 23:01:56.950097       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0728 23:01:56.950796       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0728 23:01:56.950176       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0728 23:01:56.950842       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0728 23:01:57.757064       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0728 23:01:57.757215       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0728 23:01:57.863738       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0728 23:01:57.863801       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0728 23:01:57.865849       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0728 23:01:57.865894       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0728 23:01:57.869856       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0728 23:01:57.869887       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0728 23:01:57.877291       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0728 23:01:57.877336       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0728 23:01:57.994729       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0728 23:01:57.994765       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0728 23:01:58.065876       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0728 23:01:58.065961       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0728 23:01:58.094855       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0728 23:01:58.094891       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0728 23:01:58.446860       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0728 23:02:17.206740       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0728 23:02:17.206775       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0728 23:02:17.207992       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2022-07-28 23:02:31 UTC, end at Thu 2022-07-28 23:03:34 UTC. --
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]: E0728 23:03:34.217042    3514 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kubernetes-dashboard-5fd5574d9f-f9884_kubernetes-dashboard(c13ffb45-5be0-4a50-b394-ade05cdcdac9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kubernetes-dashboard-5fd5574d9f-f9884_kubernetes-dashboard(c13ffb45-5be0-4a50-b394-ade05cdcdac9)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"762cef4c9f956dd744c1318585f775d632b2c7b6aec00b900638efe23da4e0da\\\" network for pod \\\"kubernetes-dashboard-5fd5574d9f-f9884\\\": networkPlugin cni failed to set up pod \\\"kubernetes-dashboard-5fd5574d9f-f9884_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"762cef4c9f956dd744c1318585f775d632b2c7b6aec00b900638efe23da4e0da\\\" network for pod \\\"kubernetes-dashboard-5fd5574d9f-f9884\\\": networkPlugin cni failed to teardown pod \\\"kubernetes-dashboard-5fd5574d9f-f9884_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.31 -j CNI-3339f6b7610c252b89f22bf6 -m comment --comment name: \\\"crio\\\" id: \\\"762cef4c9f956dd744c1318585f775d632b2c7b6aec00b900638efe23da4e0da\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-3339f6b7610c252b89f22bf6':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/kubernetes-dashboard-5fd5574d9f-f9884" podUID=c13ffb45-5be0-4a50-b394-ade05cdcdac9
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]: E0728 23:03:34.499007    3514 remote_runtime.go:212] "RunPodSandbox from runtime service failed" err=<
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:         rpc error: code = Unknown desc = [failed to set up sandbox container "229a8ae10847a9df96a249c67f167d086d794cdbae002d15f63c5acf70cc284a" network for pod "dashboard-metrics-scraper-dffd48c4c-vkblv": networkPlugin cni failed to set up pod "dashboard-metrics-scraper-dffd48c4c-vkblv_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "229a8ae10847a9df96a249c67f167d086d794cdbae002d15f63c5acf70cc284a" network for pod "dashboard-metrics-scraper-dffd48c4c-vkblv": networkPlugin cni failed to teardown pod "dashboard-metrics-scraper-dffd48c4c-vkblv_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.32 -j CNI-802452d40573c3c19b18ba2f -m comment --comment name: "crio" id: "229a8ae10847a9df96a249c67f167d086d794cdbae002d15f63c5acf70cc284a" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-802452d40573c3c19b18ba2f':No such file or directory
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:         
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:         ]
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:  >
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]: E0728 23:03:34.499053    3514 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:         rpc error: code = Unknown desc = [failed to set up sandbox container "229a8ae10847a9df96a249c67f167d086d794cdbae002d15f63c5acf70cc284a" network for pod "dashboard-metrics-scraper-dffd48c4c-vkblv": networkPlugin cni failed to set up pod "dashboard-metrics-scraper-dffd48c4c-vkblv_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "229a8ae10847a9df96a249c67f167d086d794cdbae002d15f63c5acf70cc284a" network for pod "dashboard-metrics-scraper-dffd48c4c-vkblv": networkPlugin cni failed to teardown pod "dashboard-metrics-scraper-dffd48c4c-vkblv_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.32 -j CNI-802452d40573c3c19b18ba2f -m comment --comment name: "crio" id: "229a8ae10847a9df96a249c67f167d086d794cdbae002d15f63c5acf70cc284a" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-802452d40573c3c19b18ba2f':No such file or directory
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:         
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:         ]
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:  > pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-vkblv"
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]: E0728 23:03:34.499071    3514 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err=<
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:         rpc error: code = Unknown desc = [failed to set up sandbox container "229a8ae10847a9df96a249c67f167d086d794cdbae002d15f63c5acf70cc284a" network for pod "dashboard-metrics-scraper-dffd48c4c-vkblv": networkPlugin cni failed to set up pod "dashboard-metrics-scraper-dffd48c4c-vkblv_kubernetes-dashboard" network: failed to set bridge addr: could not add IP address to "cni0": permission denied, failed to clean up sandbox container "229a8ae10847a9df96a249c67f167d086d794cdbae002d15f63c5acf70cc284a" network for pod "dashboard-metrics-scraper-dffd48c4c-vkblv": networkPlugin cni failed to teardown pod "dashboard-metrics-scraper-dffd48c4c-vkblv_kubernetes-dashboard" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.32 -j CNI-802452d40573c3c19b18ba2f -m comment --comment name: "crio" id: "229a8ae10847a9df96a249c67f167d086d794cdbae002d15f63c5acf70cc284a" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-802452d40573c3c19b18ba2f':No such file or directory
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:         
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:         Try `iptables -h' or 'iptables --help' for more information.
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:         ]
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]:  > pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-vkblv"
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]: E0728 23:03:34.499672    3514 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"dashboard-metrics-scraper-dffd48c4c-vkblv_kubernetes-dashboard(ef42a51e-e374-4829-a27d-d04c4d3f2bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"dashboard-metrics-scraper-dffd48c4c-vkblv_kubernetes-dashboard(ef42a51e-e374-4829-a27d-d04c4d3f2bf9)\\\": rpc error: code = Unknown desc = [failed to set up sandbox container \\\"229a8ae10847a9df96a249c67f167d086d794cdbae002d15f63c5acf70cc284a\\\" network for pod \\\"dashboard-metrics-scraper-dffd48c4c-vkblv\\\": networkPlugin cni failed to set up pod \\\"dashboard-metrics-scraper-dffd48c4c-vkblv_kubernetes-dashboard\\\" network: failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied, failed to clean up sandbox container \\\"229a8ae10847a9df96a249c67f167d086d794cdbae002d15f63c5acf70cc284a\\\" network for pod \\\"dashboard-metrics-scraper-dffd48c4c-vkblv\\\": networkPlugin cni failed to teardown pod \\\"dashboard-metrics-scraper-dffd48c4c-vkblv_kubernetes-dashboard\\\" network: running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.32 -j CNI-802452d40573c3c19b18ba2f -m comment --comment name: \\\"crio\\\" id: \\\"229a8ae10847a9df96a249c67f167d086d794cdbae002d15f63c5acf70cc284a\\\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-802452d40573c3c19b18ba2f':No such file or directory\\n\\nTry `iptables -h' or 'iptables --help' for more information.\\n]\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-dffd48c4c-vkblv" podUID=ef42a51e-e374-4829-a27d-d04c4d3f2bf9
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]: I0728 23:03:34.508016    3514 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="32bdfec01371e7458317bd765399bbf72d75d33b2478c49bd45987cb50ca097e"
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]: I0728 23:03:34.515625    3514 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f33f482cca44d9bd4b3b09e83a06988498c606f8bb90b34e8ab07bd779318c5b"
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]: I0728 23:03:34.515647    3514 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="762cef4c9f956dd744c1318585f775d632b2c7b6aec00b900638efe23da4e0da"
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]: I0728 23:03:34.535879    3514 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="05a2ca58fa225c166a5b6aba05a38b442ea9cee0df48315f0844acfbf96c681a"
	Jul 28 23:03:34 newest-cni-20220728160133-12923 kubelet[3514]: I0728 23:03:34.545866    3514 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="d1def6586de6401b9e89403fcc4b399cfadb17e1a0c5f9eca3d77a74c663aaf7"
	
	* 
	* ==> storage-provisioner [530b6eb6d7c7] <==
	* I0728 23:02:16.179661       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 23:02:16.189621       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 23:02:16.189732       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0728 23:02:16.198129       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0728 23:02:16.198212       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca1f46b7-9e6c-4222-b687-d530a192b3a5", APIVersion:"v1", ResourceVersion:"383", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220728160133-12923_a9ebf106-3c81-4a79-892c-578d732a17e8 became leader
	I0728 23:02:16.198330       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220728160133-12923_a9ebf106-3c81-4a79-892c-578d732a17e8!
	I0728 23:02:16.299644       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220728160133-12923_a9ebf106-3c81-4a79-892c-578d732a17e8!
	
	* 
	* ==> storage-provisioner [8ae7c37eff3f] <==
	* I0728 23:02:46.103712       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0728 23:02:46.180010       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0728 23:02:46.180138       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0728 23:03:21.564963       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0728 23:03:21.565309       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_newest-cni-20220728160133-12923_199b279e-58c3-4ae1-9d38-ee57a26c704b!
	I0728 23:03:21.565296       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ca1f46b7-9e6c-4222-b687-d530a192b3a5", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' newest-cni-20220728160133-12923_199b279e-58c3-4ae1-9d38-ee57a26c704b became leader
	I0728 23:03:21.666061       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_newest-cni-20220728160133-12923_199b279e-58c3-4ae1-9d38-ee57a26c704b!
	

-- /stdout --
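Note on the failure above: sandbox creation for dashboard-metrics-scraper failed twice over. Setup failed because the CNI bridge plugin could not assign an address to cni0 ("permission denied"), and the follow-up teardown failed because the per-sandbox NAT chain CNI-802452d40573c3c19b18ba2f was already gone, so the iptables -D call exits 2. A minimal diagnostic sketch, assuming the newest-cni node is still running (these commands are illustrative, not part of the harness):

	# Check whether any per-sandbox CNI-* NAT chains remain before the delete is
	# retried, and inspect the cni0 bridge that rejected the address assignment.
	minikube -p newest-cni-20220728160133-12923 ssh -- sudo iptables -t nat -S | grep CNI- || echo "no CNI-* chains left"
	minikube -p newest-cni-20220728160133-12923 ssh -- ip addr show cni0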
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220728160133-12923 -n newest-cni-20220728160133-12923
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-20220728160133-12923 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-6d4b75cb6d-prc72 metrics-server-5c6f97fb75-h4qvh dashboard-metrics-scraper-dffd48c4c-vkblv kubernetes-dashboard-5fd5574d9f-f9884
helpers_test.go:272: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context newest-cni-20220728160133-12923 describe pod coredns-6d4b75cb6d-prc72 metrics-server-5c6f97fb75-h4qvh dashboard-metrics-scraper-dffd48c4c-vkblv kubernetes-dashboard-5fd5574d9f-f9884

=== CONT  TestStartStop/group/newest-cni/serial/Pause
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context newest-cni-20220728160133-12923 describe pod coredns-6d4b75cb6d-prc72 metrics-server-5c6f97fb75-h4qvh dashboard-metrics-scraper-dffd48c4c-vkblv kubernetes-dashboard-5fd5574d9f-f9884: exit status 1 (240.197553ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-6d4b75cb6d-prc72" not found
	Error from server (NotFound): pods "metrics-server-5c6f97fb75-h4qvh" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-dffd48c4c-vkblv" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5fd5574d9f-f9884" not found

** /stderr **
helpers_test.go:277: kubectl --context newest-cni-20220728160133-12923 describe pod coredns-6d4b75cb6d-prc72 metrics-server-5c6f97fb75-h4qvh dashboard-metrics-scraper-dffd48c4c-vkblv kubernetes-dashboard-5fd5574d9f-f9884: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (48.60s)
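The post-mortem describe step raced with pod cleanup: the four non-running pods listed at helpers_test.go:270 had already been removed by the time helpers_test.go:275 ran, so every lookup returned NotFound and kubectl exited 1. A sketch of a race-tolerant variant (hypothetical, not the harness's actual code): re-list the non-running pods and describe each one individually, tolerating pods that disappear in between:

	# List non-running pods with their namespaces, then describe each one,
	# ignoring any that vanish between the list and the describe.
	kubectl --context newest-cni-20220728160133-12923 get po -A \
	  --field-selector=status.phase!=Running --no-headers \
	  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name \
	  | while read -r ns name; do
	      kubectl --context newest-cni-20220728160133-12923 describe po "$name" -n "$ns" || true
	    done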


Test pass (247/289)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 78.58
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.35
10 TestDownloadOnly/v1.24.3/json-events 8.31
11 TestDownloadOnly/v1.24.3/preload-exists 0
14 TestDownloadOnly/v1.24.3/kubectl 0
15 TestDownloadOnly/v1.24.3/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.73
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.43
18 TestDownloadOnlyKic 7.88
19 TestBinaryMirror 1.68
20 TestOffline 50.31
22 TestAddons/Setup 149.26
26 TestAddons/parallel/MetricsServer 5.56
27 TestAddons/parallel/HelmTiller 11.17
29 TestAddons/parallel/CSI 62.69
30 TestAddons/parallel/Headlamp 9.28
32 TestAddons/serial/GCPAuth 17.24
33 TestAddons/StoppedEnableDisable 12.95
34 TestCertOptions 32.37
35 TestCertExpiration 244.12
36 TestDockerFlags 33.92
37 TestForceSystemdFlag 32.15
38 TestForceSystemdEnv 31.99
40 TestHyperKitDriverInstallOrUpdate 7.23
43 TestErrorSpam/setup 27.05
44 TestErrorSpam/start 2.28
45 TestErrorSpam/status 1.28
46 TestErrorSpam/pause 1.79
47 TestErrorSpam/unpause 1.88
48 TestErrorSpam/stop 13.07
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 42.01
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 45.54
55 TestFunctional/serial/KubeContext 0.03
56 TestFunctional/serial/KubectlGetPods 1.44
59 TestFunctional/serial/CacheCmd/cache/add_remote 5.09
60 TestFunctional/serial/CacheCmd/cache/add_local 1.89
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
62 TestFunctional/serial/CacheCmd/cache/list 0.07
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.44
64 TestFunctional/serial/CacheCmd/cache/cache_reload 2.49
65 TestFunctional/serial/CacheCmd/cache/delete 0.15
66 TestFunctional/serial/MinikubeKubectlCmd 0.5
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.63
68 TestFunctional/serial/ExtraConfig 56.12
69 TestFunctional/serial/ComponentHealth 0.05
70 TestFunctional/serial/LogsCmd 3.05
71 TestFunctional/serial/LogsFileCmd 3.14
73 TestFunctional/parallel/ConfigCmd 0.46
75 TestFunctional/parallel/DryRun 1.53
76 TestFunctional/parallel/InternationalLanguage 0.59
77 TestFunctional/parallel/StatusCmd 1.75
80 TestFunctional/parallel/ServiceCmd 15.28
82 TestFunctional/parallel/AddonsCmd 0.27
83 TestFunctional/parallel/PersistentVolumeClaim 25.3
85 TestFunctional/parallel/SSHCmd 0.95
86 TestFunctional/parallel/CpCmd 1.64
87 TestFunctional/parallel/MySQL 25.34
88 TestFunctional/parallel/FileSync 0.44
89 TestFunctional/parallel/CertSync 2.66
93 TestFunctional/parallel/NodeLabels 0.05
95 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
97 TestFunctional/parallel/Version/short 0.09
98 TestFunctional/parallel/Version/components 0.75
99 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
100 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
101 TestFunctional/parallel/ImageCommands/ImageListJson 0.39
102 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
103 TestFunctional/parallel/ImageCommands/ImageBuild 3.99
104 TestFunctional/parallel/ImageCommands/Setup 1.98
105 TestFunctional/parallel/DockerEnv/bash 1.72
106 TestFunctional/parallel/UpdateContextCmd/no_changes 0.37
107 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.4
108 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
109 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.3
110 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.32
111 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.15
112 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.72
113 TestFunctional/parallel/ImageCommands/ImageRemove 0.89
114 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.14
115 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.47
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.63
117 TestFunctional/parallel/ProfileCmd/profile_list 0.53
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.6
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.18
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/MountCmd/any-port 8.83
130 TestFunctional/parallel/MountCmd/specific-port 2.58
131 TestFunctional/delete_addon-resizer_images 0.16
132 TestFunctional/delete_my-image_image 0.07
133 TestFunctional/delete_minikube_cached_images 0.06
143 TestJSONOutput/start/Command 80.93
144 TestJSONOutput/start/Audit 0
146 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/pause/Command 0.66
150 TestJSONOutput/pause/Audit 0
152 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/unpause/Command 0.62
156 TestJSONOutput/unpause/Audit 0
158 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/stop/Command 12.3
162 TestJSONOutput/stop/Audit 0
164 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
166 TestErrorJSONOutput 0.77
168 TestKicCustomNetwork/create_custom_network 29.73
169 TestKicCustomNetwork/use_default_bridge_network 29.08
170 TestKicExistingNetwork 29.72
171 TestKicCustomSubnet 30.03
172 TestMainNoArgs 0.07
173 TestMinikubeProfile 61.78
176 TestMountStart/serial/StartWithMountFirst 7.39
177 TestMountStart/serial/VerifyMountFirst 0.42
178 TestMountStart/serial/StartWithMountSecond 7.32
179 TestMountStart/serial/VerifyMountSecond 0.42
180 TestMountStart/serial/DeleteFirst 2.22
181 TestMountStart/serial/VerifyMountPostDelete 0.41
182 TestMountStart/serial/Stop 1.64
183 TestMountStart/serial/RestartStopped 5.22
184 TestMountStart/serial/VerifyMountPostStop 0.42
187 TestMultiNode/serial/FreshStart2Nodes 117.09
188 TestMultiNode/serial/DeployApp2Nodes 6.51
189 TestMultiNode/serial/PingHostFrom2Pods 0.88
190 TestMultiNode/serial/AddNode 34.6
191 TestMultiNode/serial/ProfileList 0.5
192 TestMultiNode/serial/CopyFile 15.44
193 TestMultiNode/serial/StopNode 14.05
194 TestMultiNode/serial/StartAfterStop 19.26
196 TestMultiNode/serial/DeleteNode 7.98
197 TestMultiNode/serial/StopMultiNode 25.09
198 TestMultiNode/serial/RestartMultiNode 76.93
199 TestMultiNode/serial/ValidateNameConflict 30.64
205 TestScheduledStopUnix 100.76
206 TestSkaffold 61.08
208 TestInsufficientStorage 12.63
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.49
225 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.34
226 TestStoppedBinaryUpgrade/Setup 0.93
228 TestStoppedBinaryUpgrade/MinikubeLogs 3.54
230 TestPause/serial/Start 42.84
231 TestPause/serial/SecondStartNoReconfiguration 34.42
232 TestPause/serial/Pause 0.75
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.34
243 TestNoKubernetes/serial/StartWithK8s 29.36
244 TestNoKubernetes/serial/StartWithStopK8s 17.18
245 TestNoKubernetes/serial/Start 6.69
246 TestNoKubernetes/serial/VerifyK8sNotRunning 0.4
247 TestNoKubernetes/serial/ProfileList 4.24
248 TestNoKubernetes/serial/Stop 1.66
249 TestNoKubernetes/serial/StartNoArgs 5.85
250 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.41
251 TestNetworkPlugins/group/auto/Start 46.31
252 TestNetworkPlugins/group/kindnet/Start 50.66
253 TestNetworkPlugins/group/auto/KubeletFlags 0.43
254 TestNetworkPlugins/group/auto/NetCatPod 11.76
255 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
256 TestNetworkPlugins/group/kindnet/KubeletFlags 0.43
257 TestNetworkPlugins/group/kindnet/NetCatPod 12.63
258 TestNetworkPlugins/group/auto/DNS 0.12
259 TestNetworkPlugins/group/auto/Localhost 0.13
260 TestNetworkPlugins/group/auto/HairPin 5.11
261 TestNetworkPlugins/group/cilium/Start 78.21
262 TestNetworkPlugins/group/kindnet/DNS 0.13
263 TestNetworkPlugins/group/kindnet/Localhost 0.15
264 TestNetworkPlugins/group/kindnet/HairPin 0.14
265 TestNetworkPlugins/group/calico/Start 73.65
266 TestNetworkPlugins/group/cilium/ControllerPod 5.02
267 TestNetworkPlugins/group/calico/ControllerPod 5.02
268 TestNetworkPlugins/group/cilium/KubeletFlags 0.44
269 TestNetworkPlugins/group/cilium/NetCatPod 13.09
270 TestNetworkPlugins/group/calico/KubeletFlags 0.46
271 TestNetworkPlugins/group/calico/NetCatPod 11.76
272 TestNetworkPlugins/group/cilium/DNS 0.13
273 TestNetworkPlugins/group/cilium/Localhost 0.12
274 TestNetworkPlugins/group/cilium/HairPin 0.11
275 TestNetworkPlugins/group/calico/DNS 0.12
276 TestNetworkPlugins/group/calico/Localhost 0.11
277 TestNetworkPlugins/group/calico/HairPin 0.11
278 TestNetworkPlugins/group/false/Start 81.82
279 TestNetworkPlugins/group/bridge/Start 43.96
280 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
281 TestNetworkPlugins/group/bridge/NetCatPod 11.6
282 TestNetworkPlugins/group/bridge/DNS 0.12
283 TestNetworkPlugins/group/bridge/Localhost 0.1
284 TestNetworkPlugins/group/bridge/HairPin 0.1
285 TestNetworkPlugins/group/enable-default-cni/Start 43.7
286 TestNetworkPlugins/group/false/KubeletFlags 0.46
287 TestNetworkPlugins/group/false/NetCatPod 12.56
288 TestNetworkPlugins/group/false/DNS 0.12
289 TestNetworkPlugins/group/false/Localhost 0.11
290 TestNetworkPlugins/group/false/HairPin 5.11
291 TestNetworkPlugins/group/kubenet/Start 45.9
292 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.51
293 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.8
294 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
295 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
296 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
299 TestNetworkPlugins/group/kubenet/KubeletFlags 0.43
300 TestNetworkPlugins/group/kubenet/NetCatPod 11.59
301 TestNetworkPlugins/group/kubenet/DNS 0.12
302 TestNetworkPlugins/group/kubenet/Localhost 0.11
305 TestStartStop/group/no-preload/serial/FirstStart 50.93
306 TestStartStop/group/no-preload/serial/DeployApp 9.76
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.76
308 TestStartStop/group/no-preload/serial/Stop 12.47
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.37
310 TestStartStop/group/no-preload/serial/SecondStart 302.57
313 TestStartStop/group/old-k8s-version/serial/Stop 1.62
314 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.38
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.02
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.73
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.46
321 TestStartStop/group/embed-certs/serial/FirstStart 45.07
322 TestStartStop/group/embed-certs/serial/DeployApp 10.81
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.75
324 TestStartStop/group/embed-certs/serial/Stop 12.5
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.37
326 TestStartStop/group/embed-certs/serial/SecondStart 299.93
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.01
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.56
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.45
333 TestStartStop/group/default-k8s-different-port/serial/FirstStart 46.15
334 TestStartStop/group/default-k8s-different-port/serial/DeployApp 10.7
335 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.81
336 TestStartStop/group/default-k8s-different-port/serial/Stop 12.57
337 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.38
338 TestStartStop/group/default-k8s-different-port/serial/SecondStart 303.1
339 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 6.01
340 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 6.56
341 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.45
344 TestStartStop/group/newest-cni/serial/FirstStart 42.03
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.84
348 TestStartStop/group/newest-cni/serial/Stop 12.49
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.4
350 TestStartStop/group/newest-cni/serial/SecondStart 17.85
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.46
TestDownloadOnly/v1.16.0/json-events (78.58s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220728143805-12923 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220728143805-12923 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (1m18.580444097s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (78.58s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.35s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220728143805-12923
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220728143805-12923: exit status 85 (345.665654ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| Command |                Args                |              Profile               |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | download-only-20220728143805-12923 | jenkins | v1.26.0 | 28 Jul 22 14:38 PDT |          |
	|         | download-only-20220728143805-12923 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 14:38:05
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 14:38:05.846772   12925 out.go:296] Setting OutFile to fd 1 ...
	I0728 14:38:05.846929   12925 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:38:05.846934   12925 out.go:309] Setting ErrFile to fd 2...
	I0728 14:38:05.846938   12925 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:38:05.847986   12925 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	W0728 14:38:05.848256   12925 root.go:310] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/config/config.json: no such file or directory
	I0728 14:38:05.848978   12925 out.go:303] Setting JSON to true
	I0728 14:38:05.864408   12925 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5327,"bootTime":1659038958,"procs":357,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 14:38:05.864500   12925 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 14:38:05.890376   12925 out.go:97] [download-only-20220728143805-12923] minikube v1.26.0 on Darwin 12.5
	I0728 14:38:05.890564   12925 notify.go:193] Checking for updates...
	W0728 14:38:05.890569   12925 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball: no such file or directory
	I0728 14:38:05.912120   12925 out.go:169] MINIKUBE_LOCATION=14555
	I0728 14:38:05.933301   12925 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 14:38:05.957575   12925 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 14:38:05.979344   12925 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 14:38:06.001536   12925 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	W0728 14:38:06.045100   12925 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0728 14:38:06.045450   12925 driver.go:365] Setting default libvirt URI to qemu:///system
	W0728 14:39:05.239755   12925 docker.go:113] docker version returned error: deadline exceeded running "docker version --format {{.Server.Os}}-{{.Server.Version}}": signal: killed
	I0728 14:39:05.261961   12925 out.go:97] Using the docker driver based on user configuration
	I0728 14:39:05.261992   12925 start.go:284] selected driver: docker
	I0728 14:39:05.262001   12925 start.go:808] validating driver "docker" against <nil>
	I0728 14:39:05.262151   12925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 14:39:05.391696   12925 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib
/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 14:39:05.413615   12925 out.go:169] - Ensure your docker daemon has access to enough CPU/memory resources.
	I0728 14:39:05.434546   12925 out.go:169] - Docs https://docs.docker.com/docker-for-mac/#resources
	I0728 14:39:05.476579   12925 out.go:169] 
	W0728 14:39:05.497511   12925 out_reason.go:110] Requested cpu count 2 is greater than the available cpus of 0
	I0728 14:39:05.518537   12925 out.go:169] 
	I0728 14:39:05.560428   12925 out.go:169] 
	W0728 14:39:05.581521   12925 out_reason.go:110] Docker Desktop has less than 2 CPUs configured, but Kubernetes requires at least 2 to be available
	W0728 14:39:05.581654   12925 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "CPUs" slider bar to 2 or higher
	    5. Click "Apply & Restart"
	W0728 14:39:05.581727   12925 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0728 14:39:05.602547   12925 out.go:169] 
	I0728 14:39:05.623527   12925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 14:39:05.748639   12925 info.go:265] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:fals
e ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib
/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0728 14:39:05.769565   12925 out.go:272] docker is currently using the  storage driver, consider switching to overlay2 for better performance
	I0728 14:39:05.769634   12925 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0728 14:39:05.813415   12925 out.go:169] 
	W0728 14:39:05.834691   12925 out_reason.go:110] Docker Desktop only has 0MiB available, less than the required 1800MiB for Kubernetes
	W0728 14:39:05.834797   12925 out_reason.go:110] Suggestion: 
	
	    1. Click on "Docker for Desktop" menu icon
	    2. Click "Preferences"
	    3. Click "Resources"
	    4. Increase "Memory" slider bar to 2.25 GB or higher
	    5. Click "Apply & Restart"
	W0728 14:39:05.834833   12925 out_reason.go:110] Documentation: https://docs.docker.com/docker-for-mac/#resources
	I0728 14:39:05.855528   12925 out.go:169] 
	I0728 14:39:05.897540   12925 out.go:169] 
	W0728 14:39:05.918658   12925 out_reason.go:110] docker only has 0MiB available, less than the required 1800MiB for Kubernetes
	I0728 14:39:05.939399   12925 out.go:169] 
	I0728 14:39:05.960559   12925 start_flags.go:377] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0728 14:39:05.960674   12925 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0728 14:39:05.981549   12925 out.go:169] Using Docker Desktop driver with root privileges
	I0728 14:39:06.002409   12925 cni.go:95] Creating CNI manager for ""
	I0728 14:39:06.002429   12925 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 14:39:06.002439   12925 start_flags.go:310] config:
	{Name:download-only-20220728143805-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220728143805-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDoma
in:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 14:39:06.023514   12925 out.go:97] Starting control plane node download-only-20220728143805-12923 in cluster download-only-20220728143805-12923
	I0728 14:39:06.023542   12925 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 14:39:06.044541   12925 out.go:97] Pulling base image ...
	I0728 14:39:06.044574   12925 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 14:39:06.044621   12925 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 14:39:06.044719   12925 cache.go:107] acquiring lock: {Name:mk0da31677c1f852bd2b798b5bd1fb1d4b8c33a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 14:39:06.044713   12925 cache.go:107] acquiring lock: {Name:mk4cfa58941623462941014d80d5787a0643aa15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 14:39:06.044790   12925 cache.go:107] acquiring lock: {Name:mk371ff6c3fd8781a9f966f9fa274719e0b1108c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 14:39:06.044838   12925 cache.go:107] acquiring lock: {Name:mk6a86e60aeb794c47643179098158b988c232ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 14:39:06.044820   12925 cache.go:107] acquiring lock: {Name:mk6150e14fbc09b377c0e2ffb3fc243f09c00524 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 14:39:06.045783   12925 cache.go:107] acquiring lock: {Name:mkc1881563fc6a43c0b4667d440bc039e186d159 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 14:39:06.045860   12925 cache.go:107] acquiring lock: {Name:mke3dc526a49f2ad41ce1d01fcac2f8e805ca446 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 14:39:06.045751   12925 cache.go:107] acquiring lock: {Name:mk71457b03f2f483fbef38e98407fe7fae1dc9d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0728 14:39:06.046207   12925 profile.go:148] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/download-only-20220728143805-12923/config.json ...
	I0728 14:39:06.046353   12925 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0728 14:39:06.046695   12925 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0728 14:39:06.046717   12925 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0728 14:39:06.046699   12925 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0728 14:39:06.046697   12925 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0728 14:39:06.046724   12925 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0728 14:39:06.046745   12925 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0728 14:39:06.046796   12925 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0728 14:39:06.046712   12925 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/download-only-20220728143805-12923/config.json: {Name:mk97c2a1e76c7bc835e72c39ee957e6f53352fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0728 14:39:06.047253   12925 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0728 14:39:06.047683   12925 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/linux/amd64/v1.16.0/kubeadm
	I0728 14:39:06.047685   12925 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/linux/amd64/v1.16.0/kubectl
	I0728 14:39:06.047686   12925 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/linux/amd64/v1.16.0/kubelet
	I0728 14:39:06.051869   12925 image.go:177] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0728 14:39:06.053525   12925 image.go:177] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0728 14:39:06.053998   12925 image.go:177] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0728 14:39:06.054684   12925 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0728 14:39:06.054859   12925 image.go:177] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0728 14:39:06.054958   12925 image.go:177] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0728 14:39:06.055068   12925 image.go:177] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0728 14:39:06.055137   12925 image.go:177] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: no such file or directory
	I0728 14:39:06.106944   12925 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 to local cache
	I0728 14:39:06.107120   12925 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local cache directory
	I0728 14:39:06.107242   12925 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 to local cache
	I0728 14:39:06.872289   12925 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0
	I0728 14:39:06.873633   12925 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0
	I0728 14:39:06.882252   12925 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0
	I0728 14:39:06.887807   12925 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0728 14:39:06.891842   12925 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1
	I0728 14:39:06.943182   12925 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0
	I0728 14:39:07.021362   12925 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2
	I0728 14:39:07.042243   12925 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0
	I0728 14:39:07.100413   12925 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 exists
	I0728 14:39:07.100433   12925 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1" took 1.055676337s
	I0728 14:39:07.100443   12925 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.1 succeeded
	I0728 14:39:08.446496   12925 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0728 14:39:08.446514   12925 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.401833968s
	I0728 14:39:08.446524   12925 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0728 14:39:09.385394   12925 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 exists
	I0728 14:39:09.385414   12925 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2" took 3.340632127s
	I0728 14:39:09.385423   12925 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/coredns_1.6.2 succeeded
	I0728 14:39:10.028400   12925 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	I0728 14:39:10.151979   12925 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0728 14:39:10.151995   12925 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0" took 4.107328182s
	I0728 14:39:10.152003   12925 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0728 14:39:10.614212   12925 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0728 14:39:10.614230   12925 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0" took 4.569458828s
	I0728 14:39:10.614238   12925 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0728 14:39:11.080515   12925 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0728 14:39:11.080536   12925 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0" took 5.035783868s
	I0728 14:39:11.080546   12925 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0728 14:39:11.111806   12925 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0728 14:39:11.111823   12925 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0" took 5.067017394s
	I0728 14:39:11.111832   12925 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0728 14:39:12.403875   12925 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 exists
	I0728 14:39:12.403892   12925 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0" took 6.35831041s
	I0728 14:39:12.403902   12925 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0728 14:39:12.403919   12925 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220728143805-12923"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.35s)
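Two details in the log above are easy to misread. First, the non-zero exit is expected: a --download-only profile never creates a control-plane node ("The control plane node \"\" does not exist."), so minikube logs has nothing to show, and the test tolerates the failure while measuring its duration. Second, the Docker daemon was unreachable early in the run (dial unix docker.raw.sock: connect: no such file or directory), which is why docker system info reported 0 CPUs and 0 MiB and triggered the resource warnings; the download itself does not need the daemon. A sketch of reproducing the expected exit code, reusing the same commands the test runs:

	# Create a download-only profile, then ask for its logs; expect a non-zero
	# exit (85 in this run) because no control plane was ever started.
	out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220728143805-12923 \
	  --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
	out/minikube-darwin-amd64 logs -p download-only-20220728143805-12923; echo "exit: $?"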

TestDownloadOnly/v1.24.3/json-events (8.31s)

=== RUN   TestDownloadOnly/v1.24.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220728143805-12923 --force --alsologtostderr --kubernetes-version=v1.24.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220728143805-12923 --force --alsologtostderr --kubernetes-version=v1.24.3 --container-runtime=docker --driver=docker : (8.306044724s)
--- PASS: TestDownloadOnly/v1.24.3/json-events (8.31s)

TestDownloadOnly/v1.24.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.24.3/preload-exists
--- PASS: TestDownloadOnly/v1.24.3/preload-exists (0.00s)

TestDownloadOnly/v1.24.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.24.3/kubectl
--- PASS: TestDownloadOnly/v1.24.3/kubectl (0.00s)

TestDownloadOnly/v1.24.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.24.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220728143805-12923
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220728143805-12923: exit status 85 (290.322272ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| Command |                Args                |              Profile               |  User   | Version |     Start Time      | End Time |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only -p         | download-only-20220728143805-12923 | jenkins | v1.26.0 | 28 Jul 22 14:38 PDT |          |
	|         | download-only-20220728143805-12923 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	| start   | -o=json --download-only -p         | download-only-20220728143805-12923 | jenkins | v1.26.0 | 28 Jul 22 14:39 PDT |          |
	|         | download-only-20220728143805-12923 |                                    |         |         |                     |          |
	|         | --force --alsologtostderr          |                                    |         |         |                     |          |
	|         | --kubernetes-version=v1.24.3       |                                    |         |         |                     |          |
	|         | --container-runtime=docker         |                                    |         |         |                     |          |
	|         | --driver=docker                    |                                    |         |         |                     |          |
	|---------|------------------------------------|------------------------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/07/28 14:39:25
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.18.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0728 14:39:25.009831   14458 out.go:296] Setting OutFile to fd 1 ...
	I0728 14:39:25.009998   14458 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:39:25.010004   14458 out.go:309] Setting ErrFile to fd 2...
	I0728 14:39:25.010007   14458 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:39:25.010114   14458 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	W0728 14:39:25.010217   14458 root.go:310] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/config/config.json: no such file or directory
	I0728 14:39:25.010555   14458 out.go:303] Setting JSON to true
	I0728 14:39:25.025460   14458 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5407,"bootTime":1659038958,"procs":354,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 14:39:25.025569   14458 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 14:39:25.047237   14458 out.go:97] [download-only-20220728143805-12923] minikube v1.26.0 on Darwin 12.5
	I0728 14:39:25.047382   14458 notify.go:193] Checking for updates...
	W0728 14:39:25.047448   14458 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball: no such file or directory
	I0728 14:39:25.069944   14458 out.go:169] MINIKUBE_LOCATION=14555
	I0728 14:39:25.092305   14458 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 14:39:25.114089   14458 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 14:39:25.136222   14458 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 14:39:25.158218   14458 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	W0728 14:39:25.201013   14458 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0728 14:39:25.201684   14458 config.go:178] Loaded profile config "download-only-20220728143805-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0728 14:39:25.201761   14458 start.go:716] api.Load failed for download-only-20220728143805-12923: filestore "download-only-20220728143805-12923": Docker machine "download-only-20220728143805-12923" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0728 14:39:25.201836   14458 driver.go:365] Setting default libvirt URI to qemu:///system
	W0728 14:39:25.201868   14458 start.go:716] api.Load failed for download-only-20220728143805-12923: filestore "download-only-20220728143805-12923": Docker machine "download-only-20220728143805-12923" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0728 14:39:25.267674   14458 docker.go:137] docker version: linux-20.10.17
	I0728 14:39:25.267793   14458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 14:39:25.397131   14458 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-07-28 21:39:25.326885296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 14:39:25.419045   14458 out.go:97] Using the docker driver based on existing profile
	I0728 14:39:25.419072   14458 start.go:284] selected driver: docker
	I0728 14:39:25.419077   14458 start.go:808] validating driver "docker" against &{Name:download-only-20220728143805-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220728143805-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 14:39:25.419276   14458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 14:39:25.551069   14458 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:45 SystemTime:2022-07-28 21:39:25.479192045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 14:39:25.553168   14458 cni.go:95] Creating CNI manager for ""
	I0728 14:39:25.553188   14458 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I0728 14:39:25.553200   14458 start_flags.go:310] config:
	{Name:download-only-20220728143805-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:download-only-20220728143805-12923 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 14:39:25.580517   14458 out.go:97] Starting control plane node download-only-20220728143805-12923 in cluster download-only-20220728143805-12923
	I0728 14:39:25.580574   14458 cache.go:120] Beginning downloading kic base image for docker with docker
	I0728 14:39:25.602541   14458 out.go:97] Pulling base image ...
	I0728 14:39:25.602636   14458 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 14:39:25.602755   14458 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local docker daemon
	I0728 14:39:25.664436   14458 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 to local cache
	I0728 14:39:25.664638   14458 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local cache directory
	I0728 14:39:25.664656   14458 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 in local cache directory, skipping pull
	I0728 14:39:25.664661   14458 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 exists in cache, skipping pull
	I0728 14:39:25.664669   14458 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 as a tarball
	I0728 14:39:25.669692   14458 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.3/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	I0728 14:39:25.669708   14458 cache.go:57] Caching tarball of preloaded images
	I0728 14:39:25.669913   14458 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
	I0728 14:39:25.691479   14458 out.go:97] Downloading Kubernetes v1.24.3 preload ...
	I0728 14:39:25.691587   14458 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 ...
	I0728 14:39:25.794152   14458 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.3/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4?checksum=md5:ae1c8e7b1fa116b4699d7551d3812287 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220728143805-12923"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.24.3/LogsDuration (0.29s)
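For reference, the preload fetch in the log above passes the expected digest in the URL query (?checksum=md5:ae1c8e7b1fa116b4699d7551d3812287) and verifies it after the transfer. A minimal stdlib sketch of that download-then-verify pattern, not minikube's actual downloader (which adds retries and progress reporting):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchWithMD5 streams url into dst while hashing the bytes, then
// compares the hex digest against want and removes the file on mismatch.
func fetchWithMD5(url, dst, want string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		os.Remove(dst)
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// URL and checksum are the ones from the download.go line above;
	// the destination filename is illustrative.
	err := fetchWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.3/preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4",
		"preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4",
		"ae1c8e7b1fa116b4699d7551d3812287",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}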

                                                
                                    
TestDownloadOnly/DeleteAll (0.73s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.73s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220728143805-12923
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.43s)

                                                
                                    
TestDownloadOnlyKic (7.88s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220728143935-12923 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220728143935-12923 --force --alsologtostderr --driver=docker : (6.726483126s)
helpers_test.go:175: Cleaning up "download-docker-20220728143935-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220728143935-12923
--- PASS: TestDownloadOnlyKic (7.88s)

                                                
                                    
TestBinaryMirror (1.68s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220728143943-12923 --alsologtostderr --binary-mirror http://127.0.0.1:54839 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220728143943-12923 --alsologtostderr --binary-mirror http://127.0.0.1:54839 --driver=docker : (1.019413781s)
helpers_test.go:175: Cleaning up "binary-mirror-20220728143943-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220728143943-12923
--- PASS: TestBinaryMirror (1.68s)
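The binary-mirror run above points minikube at a throwaway HTTP endpoint on 127.0.0.1:54839 instead of the public release bucket. A sketch of the smallest possible mirror server, assuming a local ./mirror directory (the directory name is illustrative) laid out like the upstream bucket:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Expects files such as ./mirror/v1.24.3/bin/linux/amd64/kubectl,
	// mirroring the layout minikube requests from the default bucket.
	log.Fatal(http.ListenAndServe("127.0.0.1:54839",
		http.FileServer(http.Dir("./mirror"))))
}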

                                                
                                    
TestOffline (50.31s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220728152330-12923 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220728152330-12923 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (47.514240998s)
helpers_test.go:175: Cleaning up "offline-docker-20220728152330-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220728152330-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220728152330-12923: (2.798957633s)
--- PASS: TestOffline (50.31s)

                                                
                                    
TestAddons/Setup (149.26s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220728143944-12923 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220728143944-12923 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m29.258813821s)
--- PASS: TestAddons/Setup (149.26s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.56s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: metrics-server stabilized in 2.073565ms
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-8595bd7d4c-n8vzd" [a1f03f7e-bfe8-4286-ab3c-4f5c5fc467f0] Running
addons_test.go:361: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007801937s
addons_test.go:367: (dbg) Run:  kubectl --context addons-20220728143944-12923 top pods -n kube-system
addons_test.go:384: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220728143944-12923 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.56s)
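The 6m0s wait above is a poll for pods carrying the k8s-app=metrics-server label to reach Running. A condensed client-go sketch of the same label-selector poll; the kubeconfig path, namespace, and 2s poll interval are assumptions:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); the test harness
	// manages its own kubeconfig path instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // matches the test's 6m0s wait
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
		if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
			fmt.Println("metrics-server is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for metrics-server")
}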

                                                
                                    
TestAddons/parallel/HelmTiller (11.17s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: tiller-deploy stabilized in 2.230039ms
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-c7d76457b-mmf6p" [621a6996-597a-4ad0-8913-f81f0e6b213b] Running
addons_test.go:410: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008528476s
addons_test.go:425: (dbg) Run:  kubectl --context addons-20220728143944-12923 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:425: (dbg) Done: kubectl --context addons-20220728143944-12923 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.685754145s)
addons_test.go:442: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220728143944-12923 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.17s)

                                                
                                    
TestAddons/parallel/CSI (62.69s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:513: csi-hostpath-driver pods stabilized in 4.345495ms
addons_test.go:516: (dbg) Run:  kubectl --context addons-20220728143944-12923 create -f testdata/csi-hostpath-driver/pvc.yaml
=== CONT  TestAddons/parallel/CSI
addons_test.go:516: (dbg) Done: kubectl --context addons-20220728143944-12923 create -f testdata/csi-hostpath-driver/pvc.yaml: (2.716432279s)
addons_test.go:521: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220728143944-12923 get pvc hpvc -o jsonpath={.status.phase} -n default
=== CONT  TestAddons/parallel/CSI
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220728143944-12923 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:526: (dbg) Run:  kubectl --context addons-20220728143944-12923 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:531: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [fdf3b81d-0bca-416c-8976-31492248204b] Pending
helpers_test.go:342: "task-pv-pod" [fdf3b81d-0bca-416c-8976-31492248204b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [fdf3b81d-0bca-416c-8976-31492248204b] Running
addons_test.go:531: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 32.007019151s
addons_test.go:536: (dbg) Run:  kubectl --context addons-20220728143944-12923 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:541: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220728143944-12923 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220728143944-12923 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:546: (dbg) Run:  kubectl --context addons-20220728143944-12923 delete pod task-pv-pod
addons_test.go:552: (dbg) Run:  kubectl --context addons-20220728143944-12923 delete pvc hpvc
addons_test.go:558: (dbg) Run:  kubectl --context addons-20220728143944-12923 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:563: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220728143944-12923 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:568: (dbg) Run:  kubectl --context addons-20220728143944-12923 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:573: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [6a81066b-5984-4b00-9558-1fae8cbca979] Pending
helpers_test.go:342: "task-pv-pod-restore" [6a81066b-5984-4b00-9558-1fae8cbca979] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [6a81066b-5984-4b00-9558-1fae8cbca979] Running
=== CONT  TestAddons/parallel/CSI
addons_test.go:573: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 16.010879113s
addons_test.go:578: (dbg) Run:  kubectl --context addons-20220728143944-12923 delete pod task-pv-pod-restore
addons_test.go:582: (dbg) Run:  kubectl --context addons-20220728143944-12923 delete pvc hpvc-restore
addons_test.go:586: (dbg) Run:  kubectl --context addons-20220728143944-12923 delete volumesnapshot new-snapshot-demo
addons_test.go:590: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220728143944-12923 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:590: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220728143944-12923 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.78709166s)
addons_test.go:594: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220728143944-12923 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (62.69s)
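The pvc waits above repeatedly read .status.phase via kubectl's jsonpath until the claim reports Bound. The same probe expressed with client-go, as a sketch; the namespace, claim name, and timeout come from the test, while the kubeconfig path and poll interval are assumptions:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls the claim's phase, mirroring the
// jsonpath={.status.phase} probe the helper runs via kubectl.
func waitForPVCBound(client *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pvc, err := client.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && pvc.Status.Phase == corev1.ClaimBound {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForPVCBound(client, "default", "hpvc", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("hpvc is Bound")
}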

                                                
                                    
TestAddons/parallel/Headlamp (9.28s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-20220728143944-12923 --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:737: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-20220728143944-12923 --alsologtostderr -v=1: (1.272347163s)
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-866f5bd7bc-5vxpp" [f3d8cc11-6233-4060-9589-e05a10e4f6c4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-866f5bd7bc-5vxpp" [f3d8cc11-6233-4060-9589-e05a10e4f6c4] Running
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:742: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 8.006740757s
--- PASS: TestAddons/parallel/Headlamp (9.28s)

                                                
                                    
TestAddons/serial/GCPAuth (17.24s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:605: (dbg) Run:  kubectl --context addons-20220728143944-12923 create -f testdata/busybox.yaml
addons_test.go:612: (dbg) Run:  kubectl --context addons-20220728143944-12923 create sa gcp-auth-test
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [77d02ab8-5ff6-44b4-88d9-609deb643a71] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [77d02ab8-5ff6-44b4-88d9-609deb643a71] Running
addons_test.go:618: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 10.006815854s
addons_test.go:624: (dbg) Run:  kubectl --context addons-20220728143944-12923 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:636: (dbg) Run:  kubectl --context addons-20220728143944-12923 describe sa gcp-auth-test
addons_test.go:650: (dbg) Run:  kubectl --context addons-20220728143944-12923 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:674: (dbg) Run:  kubectl --context addons-20220728143944-12923 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:687: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220728143944-12923 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:687: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220728143944-12923 addons disable gcp-auth --alsologtostderr -v=1: (6.64902733s)
--- PASS: TestAddons/serial/GCPAuth (17.24s)
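From inside the busybox pod, the gcp-auth addon's contract is an environment variable pointing at mounted service-account JSON plus a project name, which is what the printenv and cat probes above assert. A small in-pod consumer sketch (stdlib only, no GCP client):

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// The gcp-auth addon injects these into pods; the test verifies both.
	credsPath := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
	project := os.Getenv("GOOGLE_CLOUD_PROJECT")
	if credsPath == "" {
		log.Fatal("GOOGLE_APPLICATION_CREDENTIALS not set")
	}
	data, err := os.ReadFile(credsPath)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("project %q, %d bytes of service-account JSON\n", project, len(data))
}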

                                                
                                    
TestAddons/StoppedEnableDisable (12.95s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:134: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220728143944-12923
addons_test.go:134: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220728143944-12923: (12.523999402s)
addons_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220728143944-12923
addons_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220728143944-12923
--- PASS: TestAddons/StoppedEnableDisable (12.95s)

                                                
                                    
TestCertOptions (32.37s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220728152503-12923 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220728152503-12923 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (28.686961494s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220728152503-12923 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220728152503-12923 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220728152503-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220728152503-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220728152503-12923: (2.74302088s)
--- PASS: TestCertOptions (32.37s)
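The openssl probe above dumps the apiserver certificate to confirm that the extra --apiserver-ips and --apiserver-names values became subject alternative names. An equivalent check with crypto/x509, as a sketch; the cert path is the one from the test and is readable inside the node, not on the host:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Inside the minikube node this is the apiserver serving cert;
	// on the host you would first copy it out with `minikube ssh` or `minikube cp`.
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, www.google.com, ...
	fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
}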

                                                
                                    
TestCertExpiration (244.12s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220728152452-12923 --memory=2048 --cert-expiration=3m --driver=docker 
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220728152452-12923 --memory=2048 --cert-expiration=3m --driver=docker : (28.279800582s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220728152452-12923 --memory=2048 --cert-expiration=8760h --driver=docker 
E0728 15:28:25.274650   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
E0728 15:28:45.755923   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220728152452-12923 --memory=2048 --cert-expiration=8760h --driver=docker : (33.074022633s)
helpers_test.go:175: Cleaning up "cert-expiration-20220728152452-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220728152452-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220728152452-12923: (2.766160875s)
--- PASS: TestCertExpiration (244.12s)

                                                
                                    
TestDockerFlags (33.92s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220728152429-12923 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220728152429-12923 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (30.333508817s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220728152429-12923 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220728152429-12923 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-20220728152429-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220728152429-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220728152429-12923: (2.669163754s)
--- PASS: TestDockerFlags (33.92s)
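systemctl show emits one KEY=VALUE line per property, so Environment=FOO=BAR BAZ=BAT is the shape the test inspects after passing --docker-env twice. A small sketch of parsing that output; the sample line mirrors the flags above:

package main

import (
	"fmt"
	"strings"
)

// parseShowProperty splits one `systemctl show --property=...` line into
// the property name and its values; Environment values are
// space-separated KEY=VALUE pairs.
func parseShowProperty(line string) (string, []string) {
	key, rest, _ := strings.Cut(strings.TrimSpace(line), "=")
	if rest == "" {
		return key, nil
	}
	return key, strings.Fields(rest)
}

func main() {
	// Sample shaped like the test's `systemctl show docker` probe.
	key, vals := parseShowProperty("Environment=FOO=BAR BAZ=BAT")
	fmt.Println(key, vals) // Environment [FOO=BAR BAZ=BAT]
}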

                                                
                                    
TestForceSystemdFlag (32.15s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220728152420-12923 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220728152420-12923 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (28.881976863s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220728152420-12923 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220728152420-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220728152420-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220728152420-12923: (2.769081152s)
--- PASS: TestForceSystemdFlag (32.15s)
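Both force-systemd variants settle the question with a single docker info template. The same probe driven from Go via os/exec, as a sketch; it assumes a docker CLI on PATH that talks to the minikube node (for example after eval $(minikube -p <profile> docker-env)):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of the test's: docker info --format {{.CgroupDriver}}
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	driver := strings.TrimSpace(string(out))
	fmt.Println("cgroup driver:", driver)
	if driver != "systemd" {
		log.Fatalf("expected systemd, got %q", driver)
	}
}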

                                                
                                    
TestForceSystemdEnv (31.99s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220728152357-12923 --memory=2048 --alsologtostderr -v=5 --driver=docker 
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220728152357-12923 --memory=2048 --alsologtostderr -v=5 --driver=docker : (28.614776478s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220728152357-12923 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-20220728152357-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220728152357-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220728152357-12923: (2.76973754s)
--- PASS: TestForceSystemdEnv (31.99s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (7.23s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.23s)

                                                
                                    
TestErrorSpam/setup (27.05s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220728144359-12923 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 --driver=docker 
error_spam_test.go:78: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220728144359-12923 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 --driver=docker : (27.054554652s)
--- PASS: TestErrorSpam/setup (27.05s)

                                                
                                    
TestErrorSpam/start (2.28s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 start --dry-run
--- PASS: TestErrorSpam/start (2.28s)

                                                
                                    
TestErrorSpam/status (1.28s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 status
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 status
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 status
--- PASS: TestErrorSpam/status (1.28s)

                                                
                                    
TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 pause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 pause
--- PASS: TestErrorSpam/pause (1.79s)

                                                
                                    
TestErrorSpam/unpause (1.88s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 unpause
--- PASS: TestErrorSpam/unpause (1.88s)

                                                
                                    
TestErrorSpam/stop (13.07s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 stop
error_spam_test.go:156: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 stop: (12.404714761s)
error_spam_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220728144359-12923 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-20220728144359-12923 stop
--- PASS: TestErrorSpam/stop (13.07s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/files/etc/test/nested/copy/12923/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (42.01s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220728144449-12923 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2160: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220728144449-12923 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (42.011930753s)
--- PASS: TestFunctional/serial/StartWithProxy (42.01s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (45.54s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220728144449-12923 --alsologtostderr -v=8
functional_test.go:651: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220728144449-12923 --alsologtostderr -v=8: (45.534503054s)
functional_test.go:655: soft start took 45.53499228s for "functional-20220728144449-12923" cluster.
--- PASS: TestFunctional/serial/SoftStart (45.54s)

                                                
                                    
TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (1.44s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220728144449-12923 get po -A
functional_test.go:688: (dbg) Done: kubectl --context functional-20220728144449-12923 get po -A: (1.438124944s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (5.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 cache add k8s.gcr.io/pause:3.1: (1.258666833s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 cache add k8s.gcr.io/pause:3.3: (1.953963001s)
functional_test.go:1041: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 cache add k8s.gcr.io/pause:latest: (1.876449238s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220728144449-12923 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local149951532/001
functional_test.go:1081: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 cache add minikube-local-cache-test:functional-20220728144449-12923
functional_test.go:1081: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 cache add minikube-local-cache-test:functional-20220728144449-12923: (1.368680126s)
functional_test.go:1086: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 cache delete minikube-local-cache-test:functional-20220728144449-12923
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220728144449-12923
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.89s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (412.22205ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 cache reload: (1.192610695s)
functional_test.go:1155: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.49s)
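
The reload flow exercised above can be replayed by hand roughly as follows; every command is taken from the log, with <profile> standing in for the profile name:

  out/minikube-darwin-amd64 -p <profile> ssh sudo docker rmi k8s.gcr.io/pause:latest        # remove the image inside the node
  out/minikube-darwin-amd64 -p <profile> ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # now fails: no such image
  out/minikube-darwin-amd64 -p <profile> cache reload                                       # push cached images back into the node
  out/minikube-darwin-amd64 -p <profile> ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again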

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.5s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 kubectl -- --context functional-20220728144449-12923 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.50s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.63s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220728144449-12923 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.63s)

TestFunctional/serial/ExtraConfig (56.12s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220728144449-12923 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0728 14:47:13.950098   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:47:13.955837   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:47:13.966682   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:47:13.987374   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:47:14.028088   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:47:14.108715   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:47:14.269281   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:47:14.590036   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:47:15.230394   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:47:16.512635   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:47:19.072851   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:47:24.193105   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
functional_test.go:749: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220728144449-12923 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (56.117731412s)
functional_test.go:753: restart took 56.117851941s for "functional-20220728144449-12923" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (56.12s)
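
The timed restart reduces to a single start invocation (a sketch; <profile> is a placeholder). The cert_rotation errors above point at the client.crt of the earlier addons-20220728143944-12923 profile and did not fail this run:

  out/minikube-darwin-amd64 start -p <profile> --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all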

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220728144449-12923 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (3.05s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 logs
functional_test.go:1228: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 logs: (3.046150845s)
--- PASS: TestFunctional/serial/LogsCmd (3.05s)

TestFunctional/serial/LogsFileCmd (3.14s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2311319288/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd2311319288/001/logs.txt: (3.135448505s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.14s)
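
The two log commands above boil down to this sketch (<profile> and <path> are placeholders):

  out/minikube-darwin-amd64 -p <profile> logs                         # print cluster logs to stdout
  out/minikube-darwin-amd64 -p <profile> logs --file <path>/logs.txt  # write them to a file instead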

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220728144449-12923 config get cpus: exit status 14 (51.496632ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 config set cpus 2
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 config get cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 config get cpus
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220728144449-12923 config get cpus: exit status 14 (51.85523ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
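
Exit status 14 above is the expected result of reading an unset key; the round trip the test performs looks roughly like this (<profile> is a placeholder):

  out/minikube-darwin-amd64 -p <profile> config get cpus     # exit status 14: key not found in config
  out/minikube-darwin-amd64 -p <profile> config set cpus 2
  out/minikube-darwin-amd64 -p <profile> config get cpus     # prints the stored value
  out/minikube-darwin-amd64 -p <profile> config unset cpus   # a later get fails with status 14 again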

TestFunctional/parallel/DryRun (1.53s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220728144449-12923 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220728144449-12923 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (700.075677ms)

-- stdout --
	* [functional-20220728144449-12923] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0728 14:48:39.961773   16805 out.go:296] Setting OutFile to fd 1 ...
	I0728 14:48:39.961954   16805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:48:39.961960   16805 out.go:309] Setting ErrFile to fd 2...
	I0728 14:48:39.961963   16805 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:48:39.962067   16805 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 14:48:39.962492   16805 out.go:303] Setting JSON to false
	I0728 14:48:39.979036   16805 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5961,"bootTime":1659038958,"procs":349,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 14:48:39.979131   16805 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 14:48:40.021873   16805 out.go:177] * [functional-20220728144449-12923] minikube v1.26.0 on Darwin 12.5
	I0728 14:48:40.063645   16805 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 14:48:40.105873   16805 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 14:48:40.148007   16805 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 14:48:40.190556   16805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 14:48:40.211892   16805 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 14:48:40.233860   16805 config.go:178] Loaded profile config "functional-20220728144449-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 14:48:40.234543   16805 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 14:48:40.303861   16805 docker.go:137] docker version: linux-20.10.17
	I0728 14:48:40.303997   16805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 14:48:40.438126   16805 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-28 21:48:40.376906847 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 14:48:40.480127   16805 out.go:177] * Using the docker driver based on existing profile
	I0728 14:48:40.501422   16805 start.go:284] selected driver: docker
	I0728 14:48:40.501448   16805 start.go:808] validating driver "docker" against &{Name:functional-20220728144449-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220728144449-12923 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-pol
icy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 14:48:40.501732   16805 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 14:48:40.526080   16805 out.go:177] 
	W0728 14:48:40.547326   16805 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0728 14:48:40.568149   16805 out.go:177] 

** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220728144449-12923 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.53s)
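
A by-hand equivalent of the two dry-run checks (sketch; <profile> is a placeholder): the first request is rejected up front because 250MB is below the usable minimum, the second validates the existing profile without starting anything:

  out/minikube-darwin-amd64 start -p <profile> --dry-run --memory 250MB --alsologtostderr --driver=docker   # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
  out/minikube-darwin-amd64 start -p <profile> --dry-run --alsologtostderr -v=1 --driver=docker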

TestFunctional/parallel/InternationalLanguage (0.59s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220728144449-12923 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220728144449-12923 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (593.850303ms)

-- stdout --
	* [functional-20220728144449-12923] minikube v1.26.0 sur Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0728 14:48:28.399397   16548 out.go:296] Setting OutFile to fd 1 ...
	I0728 14:48:28.399543   16548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:48:28.399550   16548 out.go:309] Setting ErrFile to fd 2...
	I0728 14:48:28.399554   16548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 14:48:28.399681   16548 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 14:48:28.400101   16548 out.go:303] Setting JSON to false
	I0728 14:48:28.415646   16548 start.go:115] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5950,"bootTime":1659038958,"procs":348,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.5","kernelVersion":"21.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0728 14:48:28.415746   16548 start.go:123] gopshost.Virtualization returned error: not implemented yet
	I0728 14:48:28.437713   16548 out.go:177] * [functional-20220728144449-12923] minikube v1.26.0 sur Darwin 12.5
	I0728 14:48:28.481755   16548 out.go:177]   - MINIKUBE_LOCATION=14555
	I0728 14:48:28.503689   16548 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	I0728 14:48:28.525777   16548 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0728 14:48:28.547633   16548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0728 14:48:28.569490   16548 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	I0728 14:48:28.592297   16548 config.go:178] Loaded profile config "functional-20220728144449-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 14:48:28.592983   16548 driver.go:365] Setting default libvirt URI to qemu:///system
	I0728 14:48:28.662075   16548 docker.go:137] docker version: linux-20.10.17
	I0728 14:48:28.662208   16548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0728 14:48:28.793254   16548 info.go:265] docker info: {ID:CGEL:7L63:VIPX:OKTD:LALU:QDEW:WYEQ:5UGM:JX3Z:J7LS:3UVW:STXB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:false NGoroutines:51 SystemTime:2022-07-28 21:48:28.726011736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.104-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6232580096 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.17 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1 Expected:10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1} RuncCommit:{ID:v1.1.2-0-ga916309 Expected:v1.1.2-0-ga916309} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defau
lt name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.2] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.7.0] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.8] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0728 14:48:28.835686   16548 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0728 14:48:28.857195   16548 start.go:284] selected driver: docker
	I0728 14:48:28.857219   16548 start.go:808] validating driver "docker" against &{Name:functional-20220728144449-12923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.32-1656700284-14481@sha256:96d18f055abcf72b9f587e13317d6f9b5bb6f60e9fa09d6c51e11defaf9bf842 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:functional-20220728144449-12923 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-pol
icy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0728 14:48:28.857395   16548 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0728 14:48:28.882141   16548 out.go:177] 
	W0728 14:48:28.903902   16548 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0728 14:48:28.924973   16548 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.59s)

TestFunctional/parallel/StatusCmd (1.75s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.75s)
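
The three status variants above, as a sketch (<profile> is a placeholder; "kublet" is the label used verbatim in the test's format string):

  out/minikube-darwin-amd64 -p <profile> status
  out/minikube-darwin-amd64 -p <profile> status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
  out/minikube-darwin-amd64 -p <profile> status -o json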

TestFunctional/parallel/ServiceCmd (15.28s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220728144449-12923 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220728144449-12923 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54c4b5c49f-lr6xw" [fe8feb67-fdfc-461e-b5f3-f90436cc4c15] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-54c4b5c49f-lr6xw" [fe8feb67-fdfc-461e-b5f3-f90436cc4c15] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 8.008871014s
functional_test.go:1448: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1448: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 service list: (1.086505104s)
functional_test.go:1462: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 service --namespace=default --https --url hello-node: (2.025908462s)
functional_test.go:1475: found endpoint: https://127.0.0.1:55726
functional_test.go:1490: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1490: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 service hello-node --url --format={{.IP}}: (2.031091809s)
functional_test.go:1504: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 service hello-node --url: (2.027300637s)
functional_test.go:1510: found endpoint for hello-node: http://127.0.0.1:55746
--- PASS: TestFunctional/parallel/ServiceCmd (15.28s)
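
The service workflow above, condensed (sketch; <profile> is a placeholder, and the URLs/ports are allocated per run):

  kubectl --context <profile> create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
  kubectl --context <profile> expose deployment hello-node --type=NodePort --port=8080
  out/minikube-darwin-amd64 -p <profile> service list
  out/minikube-darwin-amd64 -p <profile> service hello-node --url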

TestFunctional/parallel/AddonsCmd (0.27s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 addons list
functional_test.go:1631: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (25.3s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [af238451-6e4b-4f33-b989-f129ea25e767] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010353089s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220728144449-12923 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220728144449-12923 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220728144449-12923 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220728144449-12923 apply -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [042a20cc-9cd4-479c-83c6-ce3971243a2f] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [042a20cc-9cd4-479c-83c6-ce3971243a2f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [042a20cc-9cd4-479c-83c6-ce3971243a2f] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.007877963s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220728144449-12923 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220728144449-12923 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220728144449-12923 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [58975b1b-5f70-4766-b43c-d227d370e24d] Pending
helpers_test.go:342: "sp-pod" [58975b1b-5f70-4766-b43c-d227d370e24d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:342: "sp-pod" [58975b1b-5f70-4766-b43c-d227d370e24d] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.007863251s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220728144449-12923 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.30s)
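
The persistence check above writes through the claim, recreates the pod, and reads the file back (sketch; <profile> is a placeholder):

  kubectl --context <profile> apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context <profile> apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context <profile> exec sp-pod -- touch /tmp/mount/foo
  kubectl --context <profile> delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context <profile> apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context <profile> exec sp-pod -- ls /tmp/mount   # foo survives the pod recreation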

TestFunctional/parallel/SSHCmd (0.95s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.95s)

TestFunctional/parallel/CpCmd (1.64s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh -n functional-20220728144449-12923 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 cp functional-20220728144449-12923:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd344879862/001/cp-test.txt
E0728 14:47:54.917304   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh -n functional-20220728144449-12923 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.64s)
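
The copy round trip, condensed (sketch; <profile> and <local-path> are placeholders):

  out/minikube-darwin-amd64 -p <profile> cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-darwin-amd64 -p <profile> ssh -n <profile> "sudo cat /home/docker/cp-test.txt"
  out/minikube-darwin-amd64 -p <profile> cp <profile>:/home/docker/cp-test.txt <local-path>/cp-test.txt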

TestFunctional/parallel/MySQL (25.34s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220728144449-12923 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-nmcfj" [e8ac696c-8a94-447a-bba8-fc4b9b669999] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-67f7d69d8b-nmcfj" [e8ac696c-8a94-447a-bba8-fc4b9b669999] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.013437817s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220728144449-12923 exec mysql-67f7d69d8b-nmcfj -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220728144449-12923 exec mysql-67f7d69d8b-nmcfj -- mysql -ppassword -e "show databases;": exit status 1 (118.841504ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220728144449-12923 exec mysql-67f7d69d8b-nmcfj -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220728144449-12923 exec mysql-67f7d69d8b-nmcfj -- mysql -ppassword -e "show databases;": exit status 1 (109.329877ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220728144449-12923 exec mysql-67f7d69d8b-nmcfj -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220728144449-12923 exec mysql-67f7d69d8b-nmcfj -- mysql -ppassword -e "show databases;": exit status 1 (119.077444ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220728144449-12923 exec mysql-67f7d69d8b-nmcfj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.34s)
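
The ERROR 1045 and ERROR 2002 failures above appear transient: mysqld is still initializing inside the pod, so the test retries this probe until it succeeds (sketch; <profile> and the pod name are placeholders):

  kubectl --context <profile> exec <mysql-pod> -- mysql -ppassword -e "show databases;"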

TestFunctional/parallel/FileSync (0.44s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/12923/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "sudo cat /etc/test/nested/copy/12923/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)

TestFunctional/parallel/CertSync (2.66s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/12923.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "sudo cat /etc/ssl/certs/12923.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/12923.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "sudo cat /usr/share/ca-certificates/12923.pem"
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/129232.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "sudo cat /etc/ssl/certs/129232.pem"
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/129232.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "sudo cat /usr/share/ca-certificates/129232.pem"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.66s)
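
Each synced certificate is read back at three locations, roughly as below (sketch; <profile> is a placeholder, while 12923 and the hash names 51391683.0/3ec20f2e.0 are the values from this run):

  out/minikube-darwin-amd64 -p <profile> ssh "sudo cat /etc/ssl/certs/12923.pem"
  out/minikube-darwin-amd64 -p <profile> ssh "sudo cat /usr/share/ca-certificates/12923.pem"
  out/minikube-darwin-amd64 -p <profile> ssh "sudo cat /etc/ssl/certs/51391683.0"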

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220728144449-12923 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "sudo systemctl is-active crio": exit status 1 (417.85945ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
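
On a docker-runtime cluster the crio unit is expected to be inactive, so the non-zero exit above is the passing case (sketch; <profile> is a placeholder):

  out/minikube-darwin-amd64 -p <profile> ssh "sudo systemctl is-active crio"   # prints "inactive"; ssh exits with status 3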

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.75s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)
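The two Version subtests exercise both output forms of the version command: --short prints only the minikube version, while -o=json --components also reports the versions of the component binaries inside the node image (the exact list depends on the release). Manual equivalents, using the same binary and profile as this run:

  out/minikube-darwin-amd64 -p functional-20220728144449-12923 version --short
  out/minikube-darwin-amd64 -p functional-20220728144449-12923 version -o=json --components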
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.7
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220728144449-12923
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220728144449-12923
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| k8s.gcr.io/pause                            | 3.1                             | da86e6ba6ca19 | 742kB  |
| docker.io/localhost/my-image                | functional-20220728144449-12923 | 009d0a60ded4f | 1.24MB |
| k8s.gcr.io/kube-apiserver                   | v1.24.3                         | d521dd763e2e3 | 130MB  |
| k8s.gcr.io/kube-scheduler                   | v1.24.3                         | 3a5aa3a515f5d | 51MB   |
| k8s.gcr.io/kube-controller-manager          | v1.24.3                         | 586c112956dfc | 119MB  |
| k8s.gcr.io/pause                            | 3.7                             | 221177c6082a8 | 711kB  |
| gcr.io/google-containers/addon-resizer      | functional-20220728144449-12923 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/minikube-local-cache-test | functional-20220728144449-12923 | 661e67c731f01 | 30B    |
| docker.io/library/nginx                     | latest                          | 670dcc86b69df | 142MB  |
| k8s.gcr.io/etcd                             | 3.5.3-0                         | aebe758cef4cd | 299MB  |
| gcr.io/k8s-minikube/busybox                 | latest                          | beae173ccac6a | 1.24MB |
| k8s.gcr.io/pause                            | 3.3                             | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | 56cc512116c8f | 4.4MB  |
| docker.io/library/mysql                     | 5.7                             | 3147495b3a5ce | 431MB  |
| docker.io/library/nginx                     | alpine                          | e46bcc6975310 | 23.5MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                          | a4ca41631cc7a | 46.8MB |
| k8s.gcr.io/pause                            | 3.6                             | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | 6e38f40d628db | 31.5MB |
| k8s.gcr.io/echoserver                       | 1.8                             | 82e4c8a736a4f | 95.4MB |
| k8s.gcr.io/kube-proxy                       | v1.24.3                         | 2ae1ba6417cbc | 110MB  |
| k8s.gcr.io/pause                            | latest                          | 350b164e7ae1d | 240kB  |
|---------------------------------------------|---------------------------------|---------------|--------|
E0728 14:49:57.799167   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:52:13.962315   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 14:52:41.656294   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls --format json:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"009d0a60ded4f4b5bc60aaa5169d670764d6b14ee2a593e1b5ae7c619cb477fb","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220728144449-12923"],"size":"1240000"},{"id":"670dcc86b69df89a9d5a9e1a7ae5b8f67619c1c74e19de8a35f57d6c06505fd4","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.24.3"],"size":"130000000"},{"id":"3a5aa3a515f5d28b31ac5410cf
aa56ddbbec1c4e88cbdf711db9de6bbf6b00b0","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.24.3"],"size":"51000000"},{"id":"aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.3-0"],"size":"299000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"e46bcc69753105cfd75905056666b92cee0d3e96ebf134b19f1b38de53cda93e","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23500000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"56cc51
2116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220728144449-12923"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"661e67c731f016f838c702801df0f46e7e59a8e904bba793ddeeb312e76a0b9f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220728144449-12923"],"size":"30"},{"id":"3147495b3a5ce957dee2319099a8808c1418e0b0a2c82c9b2396c5fb4b688509","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"431000000"},{"id":"2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.24.3"],"size":"110000000"},{"id":"586c112956dfc2de95aef392cb
fcbfa2b579c332993079ed4d13108ff2409f2f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.24.3"],"size":"119000000"},{"id":"221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.7"],"size":"711000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)
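Because every entry in the JSON listing above is an object with id, repoDigests, repoTags, and size fields, the output is convenient for scripting. A sketch, assuming jq is available on the host:

  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls --format json \
    | jq -r '.[] | .repoTags[0] + " " + .size'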
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls --format yaml:
- id: 221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.7
size: "711000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: e46bcc69753105cfd75905056666b92cee0d3e96ebf134b19f1b38de53cda93e
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23500000"
- id: 670dcc86b69df89a9d5a9e1a7ae5b8f67619c1c74e19de8a35f57d6c06505fd4
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 2ae1ba6417cbcd0b381139277508ddbebd0cf055344b710f7ea16e4da954a302
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.24.3
size: "110000000"
- id: aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.3-0
size: "299000000"
- id: 3a5aa3a515f5d28b31ac5410cfaa56ddbbec1c4e88cbdf711db9de6bbf6b00b0
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.24.3
size: "51000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220728144449-12923
size: "32900000"
- id: 661e67c731f016f838c702801df0f46e7e59a8e904bba793ddeeb312e76a0b9f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220728144449-12923
size: "30"
- id: 3147495b3a5ce957dee2319099a8808c1418e0b0a2c82c9b2396c5fb4b688509
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "431000000"
- id: d521dd763e2e345a72534dd1503df3f5a14645ccb3fb0c0dd672fdd6da8853db
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.24.3
size: "130000000"
- id: 586c112956dfc2de95aef392cbfcbfa2b579c332993079ed4d13108ff2409f2f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.24.3
size: "119000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh pgrep buildkitd
functional_test.go:303: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh pgrep buildkitd: exit status 1 (400.009976ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image build -t localhost/my-image:functional-20220728144449-12923 testdata/build
functional_test.go:310: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 image build -t localhost/my-image:functional-20220728144449-12923 testdata/build: (3.199304178s)
functional_test.go:315: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220728144449-12923 image build -t localhost/my-image:functional-20220728144449-12923 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 92320a677e3d
Removing intermediate container 92320a677e3d
---> 962bb7e40ff1
Step 3/3 : ADD content.txt /
---> 009d0a60ded4
Successfully built 009d0a60ded4
Successfully tagged localhost/my-image:functional-20220728144449-12923
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.99s)
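The three build steps in the stdout above imply that testdata/build contains a Dockerfile equivalent to the following sketch (reconstructed from the step log, not copied from the minikube repository):

  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /

Since pgrep buildkitd exited non-zero, the build went through the Docker daemon's classic builder rather than BuildKit, which matches the "Step 1/3"-style output.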
TestFunctional/parallel/ImageCommands/Setup (1.98s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.917600254s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220728144449-12923
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

TestFunctional/parallel/DockerEnv/bash (1.72s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:491: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220728144449-12923 docker-env) && out/minikube-darwin-amd64 status -p functional-20220728144449-12923"
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220728144449-12923 docker-env) && docker images"
E0728 14:47:34.433144   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.72s)
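docker-env prints shell exports (DOCKER_HOST and related variables) that point the local Docker client at the Docker daemon inside the minikube node, which is what lets the plain docker images call above list the cluster's images. Manual usage in the same form the test runs:

  eval $(out/minikube-darwin-amd64 -p functional-20220728144449-12923 docker-env)
  docker images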
TestFunctional/parallel/UpdateContextCmd/no_changes (0.37s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.37s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.4s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.40s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)
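All three UpdateContextCmd subtests run the same command; update-context rewrites the kubeconfig entry for the profile so that its server address matches the cluster's current IP and port (useful after the node container's address changes). As invoked above:

  out/minikube-darwin-amd64 -p functional-20220728144449-12923 update-context --alsologtostderr -v=2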
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220728144449-12923
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220728144449-12923: (2.929362657s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220728144449-12923
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220728144449-12923: (2.000319505s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.32s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:230: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.093976967s)
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220728144449-12923
functional_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220728144449-12923
functional_test.go:240: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220728144449-12923: (3.60130507s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image save gcr.io/google-containers/addon-resizer:functional-20220728144449-12923 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 image save gcr.io/google-containers/addon-resizer:functional-20220728144449-12923 /Users/jenkins/workspace/addon-resizer-save.tar: (1.718809685s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.72s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image rm gcr.io/google-containers/addon-resizer:functional-20220728144449-12923
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.89s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:404: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.805626178s)
functional_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.14s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220728144449-12923
functional_test.go:419: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220728144449-12923
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220728144449-12923 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220728144449-12923: (2.327448526s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220728144449-12923
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.47s)
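Together with ImageSaveToFile and ImageLoadFromFile above, this completes a full image round trip: save an image from the cluster to a tarball, load the tarball back into the cluster, and finally export the image into the host's Docker daemon with --daemon. The three commands, as run by the tests:

  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image save gcr.io/google-containers/addon-resizer:functional-20220728144449-12923 /Users/jenkins/workspace/addon-resizer-save.tar
  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image load /Users/jenkins/workspace/addon-resizer-save.tar
  out/minikube-darwin-amd64 -p functional-20220728144449-12923 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220728144449-12923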
TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1310: Took "434.961562ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1324: Took "94.629145ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.6s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: Took "483.322076ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1374: Took "113.592386ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.60s)
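The timings above show why --light exists: it skips probing each cluster's status, cutting the call from roughly 483ms to 114ms here. The JSON form is also scriptable; a sketch assuming jq and the current output shape, which groups profiles under "valid" and "invalid" keys:

  out/minikube-darwin-amd64 profile list -o json | jq -r '.valid[].Name'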
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220728144449-12923 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220728144449-12923 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [324e0047-bcb3-4503-b0f8-1ba902c3cfcd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [324e0047-bcb3-4503-b0f8-1ba902c3cfcd] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.010612952s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.18s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220728144449-12923 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220728144449-12923 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 16497: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
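The serial TunnelCmd tests walk the whole tunnel lifecycle: start minikube tunnel as a background daemon, deploy a LoadBalancer service from testdata/testsvc.yaml, wait for it to receive an ingress IP, verify the endpoint responds, and tear the tunnel down. The "unable to terminate pid 16497" message is the harness failing to kill the tunnel process (likely because it runs with elevated privileges to manage routes), which the test tolerates. The core steps by hand:

  out/minikube-darwin-amd64 -p functional-20220728144449-12923 tunnel --alsologtostderr
  kubectl --context functional-20220728144449-12923 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}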
TestFunctional/parallel/MountCmd/any-port (8.83s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220728144449-12923 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2685036611/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1659044908953169000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2685036611/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1659044908953169000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2685036611/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1659044908953169000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2685036611/001/test-1659044908953169000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (403.7712ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 28 21:48 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 28 21:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 28 21:48 test-1659044908953169000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh cat /mount-9p/test-1659044908953169000
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220728144449-12923 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [cd336173-37bc-4fa1-a215-1044cc988e09] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [cd336173-37bc-4fa1-a215-1044cc988e09] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [cd336173-37bc-4fa1-a215-1044cc988e09] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0728 14:48:35.877369   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [cd336173-37bc-4fa1-a215-1044cc988e09] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.008171148s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220728144449-12923 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220728144449-12923 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2685036611/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.83s)
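The mount test exports a host directory into the guest over 9p at /mount-9p, then has a pod (testdata/busybox-mount-test.yaml) read and write through it; the first findmnt probe failing with exit status 1 is normal polling while the mount is still coming up, and the retry succeeds. The same mount by hand (the host path here is an arbitrary placeholder, not the temp directory the test used):

  out/minikube-darwin-amd64 mount -p functional-20220728144449-12923 /some/host/dir:/mount-9p --alsologtostderr -v=1
  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "findmnt -T /mount-9p | grep 9p"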
TestFunctional/parallel/MountCmd/specific-port (2.58s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220728144449-12923 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2053938696/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (447.383428ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220728144449-12923 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2053938696/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh "sudo umount -f /mount-9p": exit status 1 (547.41315ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-20220728144449-12923 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220728144449-12923 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2053938696/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.58s)

TestFunctional/delete_addon-resizer_images (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220728144449-12923
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220728144449-12923
--- PASS: TestFunctional/delete_my-image_image (0.07s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220728144449-12923
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestJSONOutput/start/Command (80.93s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220728150104-12923 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0728 15:02:13.958243   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220728150104-12923 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m20.926338646s)
--- PASS: TestJSONOutput/start/Command (80.93s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220728150104-12923 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220728150104-12923 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.3s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220728150104-12923 --output=json --user=testUser
E0728 15:02:37.911708   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220728150104-12923 --output=json --user=testUser: (12.302963864s)
--- PASS: TestJSONOutput/stop/Command (12.30s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.77s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220728150241-12923 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220728150241-12923 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (336.377918ms)
-- stdout --
	{"specversion":"1.0","id":"730b32e1-097d-4fde-9cad-6efd5cc65244","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220728150241-12923] minikube v1.26.0 on Darwin 12.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3d3764dd-9e12-47c1-8f78-f40e391eb9ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14555"}}
	{"specversion":"1.0","id":"db6b053f-ffad-4a60-92f4-951cc9404b67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig"}}
	{"specversion":"1.0","id":"e0c0a705-18c0-47f9-a30d-56a22e241748","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"3e0f9f07-8336-4af5-a6bd-fa3ed92374d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"99f3bfbf-5d3e-4714-ab52-9584d67284d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube"}}
	{"specversion":"1.0","id":"a4aff601-3772-46a5-ac6a-397ce7b17d16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220728150241-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220728150241-12923
--- PASS: TestErrorJSONOutput (0.77s)
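Every line that --output=json emits above is a CloudEvents envelope; the type suffix (step, info, error) and the data map carry the payload. A minimal Go sketch of consuming one such line, assuming only the field names visible in the events above (the struct is an illustrative subset, not a type exported by minikube):

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors only the fields visible in the log above; it is an
// illustrative subset, not minikube's own type.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		// Prints: error DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on darwin/amd64
		fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}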

TestKicCustomNetwork/create_custom_network (29.73s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220728150242-12923 --network=
E0728 15:03:05.603489   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220728150242-12923 --network=: (27.015257004s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220728150242-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220728150242-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220728150242-12923: (2.650370078s)
--- PASS: TestKicCustomNetwork/create_custom_network (29.73s)
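The verification step at kic_custom_network_test.go:122 is just `docker network ls --format {{.Name}}` followed by a scan for the profile's network. A rough Go equivalent of that check, assuming only that docker is on PATH (the network name is taken from the run above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkExists mirrors the test's check: list every Docker network
// name, one per line, and look for an exact match.
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	if err != nil {
		return false, err
	}
	for _, n := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if n == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	found, err := networkExists("docker-network-20220728150242-12923")
	fmt.Println(found, err)
}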

TestKicCustomNetwork/use_default_bridge_network (29.08s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220728150311-12923 --network=bridge
E0728 15:03:37.012009   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220728150311-12923 --network=bridge: (26.523830663s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220728150311-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220728150311-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220728150311-12923: (2.494142879s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.08s)

TestKicExistingNetwork (29.72s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220728150341-12923 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220728150341-12923 --network=existing-network: (26.820625028s)
helpers_test.go:175: Cleaning up "existing-network-20220728150341-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220728150341-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220728150341-12923: (2.491344304s)
--- PASS: TestKicExistingNetwork (29.72s)

TestKicCustomSubnet (30.03s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-20220728150410-12923 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-20220728150410-12923 --subnet=192.168.60.0/24: (27.271134235s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220728150410-12923 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220728150410-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-20220728150410-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-20220728150410-12923: (2.690385716s)
--- PASS: TestKicCustomSubnet (30.03s)
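The subnet assertion reads the network's first IPAM block back with the same `docker network inspect` template shown at kic_custom_network_test.go:133. A sketch of that check in Go, with the profile name and expected subnet taken from the run above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read back the first IPAM subnet of the cluster's network with the
	// same template the test uses.
	out, err := exec.Command("docker", "network", "inspect",
		"custom-subnet-20220728150410-12923",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
		fmt.Printf("subnet mismatch: got %s, want 192.168.60.0/24\n", got)
	}
}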

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (61.78s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-20220728150440-12923 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-20220728150440-12923 --driver=docker : (27.541622256s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-20220728150440-12923 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-20220728150440-12923 --driver=docker : (26.876761434s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-20220728150440-12923
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-20220728150440-12923
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220728150440-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-20220728150440-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-20220728150440-12923: (2.72984687s)
helpers_test.go:175: Cleaning up "first-20220728150440-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-20220728150440-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-20220728150440-12923: (2.675149375s)
--- PASS: TestMinikubeProfile (61.78s)
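The `profile list -ojson` output is what the test parses to confirm both profiles exist. A hedged Go sketch of reading it; the valid/invalid grouping and the Name field are assumptions about the JSON shape based on minikube releases of this era, not something shown verbatim above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList is an illustrative subset of the JSON printed by
// `minikube profile list -o json`; the field names are assumptions.
type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}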

TestMountStart/serial/StartWithMountFirst (7.39s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220728150542-12923 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220728150542-12923 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.385446367s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.39s)

TestMountStart/serial/VerifyMountFirst (0.42s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220728150542-12923 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)

TestMountStart/serial/StartWithMountSecond (7.32s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220728150542-12923 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220728150542-12923 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.319873373s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.32s)

TestMountStart/serial/VerifyMountSecond (0.42s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220728150542-12923 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

TestMountStart/serial/DeleteFirst (2.22s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220728150542-12923 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220728150542-12923 --alsologtostderr -v=5: (2.216508689s)
--- PASS: TestMountStart/serial/DeleteFirst (2.22s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220728150542-12923 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.64s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220728150542-12923
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220728150542-12923: (1.635892958s)
--- PASS: TestMountStart/serial/Stop (1.64s)

TestMountStart/serial/RestartStopped (5.22s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220728150542-12923
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220728150542-12923: (4.219795011s)
--- PASS: TestMountStart/serial/RestartStopped (5.22s)

TestMountStart/serial/VerifyMountPostStop (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220728150542-12923 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

TestMultiNode/serial/FreshStart2Nodes (117.09s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220728150610-12923 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0728 15:07:13.953213   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 15:07:37.907490   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220728150610-12923 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m56.364318577s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (117.09s)

TestMultiNode/serial/DeployApp2Nodes (6.51s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:479: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (1.651585026s)
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- rollout status deployment/busybox: (3.339535387s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- exec busybox-d46db594c-jwp7z -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- exec busybox-d46db594c-vg2w2 -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- exec busybox-d46db594c-jwp7z -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- exec busybox-d46db594c-vg2w2 -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- exec busybox-d46db594c-jwp7z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- exec busybox-d46db594c-vg2w2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.51s)
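The DNS checks above enumerate the busybox pods by jsonpath and then run nslookup inside each one against three names of increasing specificity. A compressed Go sketch of the same loop; it shells out to plain kubectl with a --context flag instead of the test's `minikube kubectl -p` wrapper, which assumes the kubeconfig context is named after the profile (minikube sets this up on start):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "multinode-20220728150610-12923" // kubeconfig context, named after the profile

	// Enumerate the busybox pods with the same jsonpath the test uses.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}

	// Resolve the three names of increasing specificity from inside each pod.
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range []string{"kubernetes.io", "kubernetes.default",
			"kubernetes.default.svc.cluster.local"} {
			err := exec.Command("kubectl", "--context", ctx,
				"exec", pod, "--", "nslookup", host).Run()
			fmt.Printf("%s -> %s: err=%v\n", pod, host, err)
		}
	}
}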

TestMultiNode/serial/PingHostFrom2Pods (0.88s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- exec busybox-d46db594c-jwp7z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- exec busybox-d46db594c-jwp7z -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- exec busybox-d46db594c-vg2w2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220728150610-12923 -- exec busybox-d46db594c-vg2w2 -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)
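The shell pipeline above, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, leans on busybox's fixed resolver layout: the fifth output line is the answer record and its third space-separated field is the address, which the pod then pings (192.168.65.2 is the Docker Desktop host gateway seen in the run). The same extraction in Go, against a sample whose layout is only illustrative of busybox nslookup output:

package main

import (
	"fmt"
	"strings"
)

// hostIP replicates awk 'NR==5' | cut -d' ' -f3: take the fifth
// line and its third space-separated field.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Sample layout, illustrative of busybox's resolver output; the
	// fixed line/field positions are exactly why the test hard-codes them.
	sample := strings.Join([]string{
		"Server:    10.96.0.10",
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local",
		"",
		"Name:      host.minikube.internal",
		"Address 1: 192.168.65.2",
	}, "\n")
	fmt.Println(hostIP(sample)) // 192.168.65.2
}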

TestMultiNode/serial/AddNode (34.6s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220728150610-12923 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220728150610-12923 -v 3 --alsologtostderr: (33.552471087s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status --alsologtostderr: (1.043542276s)
--- PASS: TestMultiNode/serial/AddNode (34.60s)

TestMultiNode/serial/ProfileList (0.5s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.50s)

TestMultiNode/serial/CopyFile (15.44s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status --output json --alsologtostderr: (1.061567885s)
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 cp testdata/cp-test.txt multinode-20220728150610-12923:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 cp multinode-20220728150610-12923:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile581616719/001/cp-test_multinode-20220728150610-12923.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 cp multinode-20220728150610-12923:/home/docker/cp-test.txt multinode-20220728150610-12923-m02:/home/docker/cp-test_multinode-20220728150610-12923_multinode-20220728150610-12923-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m02 "sudo cat /home/docker/cp-test_multinode-20220728150610-12923_multinode-20220728150610-12923-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 cp multinode-20220728150610-12923:/home/docker/cp-test.txt multinode-20220728150610-12923-m03:/home/docker/cp-test_multinode-20220728150610-12923_multinode-20220728150610-12923-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m03 "sudo cat /home/docker/cp-test_multinode-20220728150610-12923_multinode-20220728150610-12923-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 cp testdata/cp-test.txt multinode-20220728150610-12923-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 cp multinode-20220728150610-12923-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile581616719/001/cp-test_multinode-20220728150610-12923-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 cp multinode-20220728150610-12923-m02:/home/docker/cp-test.txt multinode-20220728150610-12923:/home/docker/cp-test_multinode-20220728150610-12923-m02_multinode-20220728150610-12923.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923 "sudo cat /home/docker/cp-test_multinode-20220728150610-12923-m02_multinode-20220728150610-12923.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 cp multinode-20220728150610-12923-m02:/home/docker/cp-test.txt multinode-20220728150610-12923-m03:/home/docker/cp-test_multinode-20220728150610-12923-m02_multinode-20220728150610-12923-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m03 "sudo cat /home/docker/cp-test_multinode-20220728150610-12923-m02_multinode-20220728150610-12923-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 cp testdata/cp-test.txt multinode-20220728150610-12923-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 cp multinode-20220728150610-12923-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile581616719/001/cp-test_multinode-20220728150610-12923-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 cp multinode-20220728150610-12923-m03:/home/docker/cp-test.txt multinode-20220728150610-12923:/home/docker/cp-test_multinode-20220728150610-12923-m03_multinode-20220728150610-12923.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923 "sudo cat /home/docker/cp-test_multinode-20220728150610-12923-m03_multinode-20220728150610-12923.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 cp multinode-20220728150610-12923-m03:/home/docker/cp-test.txt multinode-20220728150610-12923-m02:/home/docker/cp-test_multinode-20220728150610-12923-m03_multinode-20220728150610-12923-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 ssh -n multinode-20220728150610-12923-m02 "sudo cat /home/docker/cp-test_multinode-20220728150610-12923-m03_multinode-20220728150610-12923-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (15.44s)
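Each leg of the copy matrix above pairs a `cp` with an `ssh -n ... sudo cat` read-back. One leg (local testdata to the control-plane node) as a Go sketch, reusing the binary path and profile name from the run above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-darwin-amd64"
	profile := "multinode-20220728150610-12923"

	// Leg 1 of the matrix: push a local file to the control-plane node.
	if err := exec.Command(bin, "-p", profile, "cp",
		"testdata/cp-test.txt", profile+":/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}

	// Read it back over SSH, as helpers_test.go:532 does after every copy.
	out, err := exec.Command(bin, "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}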

TestMultiNode/serial/StopNode (14.05s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220728150610-12923 node stop m03: (12.38653072s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status: exit status 7 (787.023709ms)

-- stdout --
	multinode-20220728150610-12923
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220728150610-12923-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220728150610-12923-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status --alsologtostderr: exit status 7 (879.552602ms)

-- stdout --
	multinode-20220728150610-12923
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220728150610-12923-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220728150610-12923-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0728 15:09:18.890434   20649 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:09:18.890565   20649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:09:18.890570   20649 out.go:309] Setting ErrFile to fd 2...
	I0728 15:09:18.890574   20649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:09:18.890679   20649 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:09:18.890844   20649 out.go:303] Setting JSON to false
	I0728 15:09:18.890859   20649 mustload.go:65] Loading cluster: multinode-20220728150610-12923
	I0728 15:09:18.891137   20649 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:09:18.891149   20649 status.go:253] checking status of multinode-20220728150610-12923 ...
	I0728 15:09:18.891547   20649 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923 --format={{.State.Status}}
	I0728 15:09:18.955319   20649 status.go:328] multinode-20220728150610-12923 host status = "Running" (err=<nil>)
	I0728 15:09:18.955346   20649 host.go:66] Checking if "multinode-20220728150610-12923" exists ...
	I0728 15:09:18.955638   20649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923
	I0728 15:09:19.019562   20649 host.go:66] Checking if "multinode-20220728150610-12923" exists ...
	I0728 15:09:19.019823   20649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:09:19.019878   20649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:09:19.084879   20649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56312 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923/id_rsa Username:docker}
	I0728 15:09:19.173152   20649 ssh_runner.go:195] Run: systemctl --version
	I0728 15:09:19.177444   20649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:09:19.186383   20649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220728150610-12923
	I0728 15:09:19.251101   20649 kubeconfig.go:92] found "multinode-20220728150610-12923" server: "https://127.0.0.1:56311"
	I0728 15:09:19.251128   20649 api_server.go:165] Checking apiserver status ...
	I0728 15:09:19.251170   20649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0728 15:09:19.260436   20649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1667/cgroup
	W0728 15:09:19.267955   20649 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1667/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0728 15:09:19.267972   20649 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:56311/healthz ...
	I0728 15:09:19.273366   20649 api_server.go:266] https://127.0.0.1:56311/healthz returned 200:
	ok
	I0728 15:09:19.273380   20649 status.go:419] multinode-20220728150610-12923 apiserver status = Running (err=<nil>)
	I0728 15:09:19.273399   20649 status.go:255] multinode-20220728150610-12923 status: &{Name:multinode-20220728150610-12923 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 15:09:19.273412   20649 status.go:253] checking status of multinode-20220728150610-12923-m02 ...
	I0728 15:09:19.273633   20649 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923-m02 --format={{.State.Status}}
	I0728 15:09:19.337054   20649 status.go:328] multinode-20220728150610-12923-m02 host status = "Running" (err=<nil>)
	I0728 15:09:19.337076   20649 host.go:66] Checking if "multinode-20220728150610-12923-m02" exists ...
	I0728 15:09:19.337358   20649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220728150610-12923-m02
	I0728 15:09:19.496684   20649 host.go:66] Checking if "multinode-20220728150610-12923-m02" exists ...
	I0728 15:09:19.496937   20649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0728 15:09:19.496985   20649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220728150610-12923-m02
	I0728 15:09:19.561220   20649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56379 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/machines/multinode-20220728150610-12923-m02/id_rsa Username:docker}
	I0728 15:09:19.646341   20649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0728 15:09:19.655165   20649 status.go:255] multinode-20220728150610-12923-m02 status: &{Name:multinode-20220728150610-12923-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0728 15:09:19.655188   20649 status.go:253] checking status of multinode-20220728150610-12923-m03 ...
	I0728 15:09:19.655452   20649 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923-m03 --format={{.State.Status}}
	I0728 15:09:19.719815   20649 status.go:328] multinode-20220728150610-12923-m03 host status = "Stopped" (err=<nil>)
	I0728 15:09:19.719835   20649 status.go:341] host is not running, skipping remaining checks
	I0728 15:09:19.719840   20649 status.go:255] multinode-20220728150610-12923-m03 status: &{Name:multinode-20220728150610-12923-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (14.05s)

TestMultiNode/serial/StartAfterStop (19.26s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220728150610-12923 node start m03 --alsologtostderr: (18.133542577s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status: (1.01788624s)
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (19.26s)

TestMultiNode/serial/DeleteNode (7.98s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220728150610-12923 node delete m03: (7.06615289s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (7.98s)

TestMultiNode/serial/StopMultiNode (25.09s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 stop
E0728 15:14:00.990349   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220728150610-12923 stop: (24.680371775s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status: exit status 7 (234.360575ms)

-- stdout --
	multinode-20220728150610-12923
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220728150610-12923-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status --alsologtostderr: exit status 7 (175.55036ms)

-- stdout --
	multinode-20220728150610-12923
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220728150610-12923-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0728 15:14:14.592586   21362 out.go:296] Setting OutFile to fd 1 ...
	I0728 15:14:14.592781   21362 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:14:14.592786   21362 out.go:309] Setting ErrFile to fd 2...
	I0728 15:14:14.592791   21362 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0728 15:14:14.592888   21362 root.go:332] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/bin
	I0728 15:14:14.593049   21362 out.go:303] Setting JSON to false
	I0728 15:14:14.593064   21362 mustload.go:65] Loading cluster: multinode-20220728150610-12923
	I0728 15:14:14.593358   21362 config.go:178] Loaded profile config "multinode-20220728150610-12923": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
	I0728 15:14:14.593370   21362 status.go:253] checking status of multinode-20220728150610-12923 ...
	I0728 15:14:14.593708   21362 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923 --format={{.State.Status}}
	I0728 15:14:14.655580   21362 status.go:328] multinode-20220728150610-12923 host status = "Stopped" (err=<nil>)
	I0728 15:14:14.655604   21362 status.go:341] host is not running, skipping remaining checks
	I0728 15:14:14.655611   21362 status.go:255] multinode-20220728150610-12923 status: &{Name:multinode-20220728150610-12923 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0728 15:14:14.655634   21362 status.go:253] checking status of multinode-20220728150610-12923-m02 ...
	I0728 15:14:14.655875   21362 cli_runner.go:164] Run: docker container inspect multinode-20220728150610-12923-m02 --format={{.State.Status}}
	I0728 15:14:14.717806   21362 status.go:328] multinode-20220728150610-12923-m02 host status = "Stopped" (err=<nil>)
	I0728 15:14:14.717833   21362 status.go:341] host is not running, skipping remaining checks
	I0728 15:14:14.717840   21362 status.go:255] multinode-20220728150610-12923-m02 status: &{Name:multinode-20220728150610-12923-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.09s)

TestMultiNode/serial/RestartMultiNode (76.93s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220728150610-12923 --wait=true -v=8 --alsologtostderr --driver=docker 
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220728150610-12923 --wait=true -v=8 --alsologtostderr --driver=docker : (1m14.589414997s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220728150610-12923 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:372: (dbg) Done: kubectl get nodes: (1.484042192s)
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (76.93s)

TestMultiNode/serial/ValidateNameConflict (30.64s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220728150610-12923
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220728150610-12923-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220728150610-12923-m02 --driver=docker : exit status 14 (383.024916ms)

-- stdout --
	* [multinode-20220728150610-12923-m02] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220728150610-12923-m02' is duplicated with machine name 'multinode-20220728150610-12923-m02' in profile 'multinode-20220728150610-12923'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220728150610-12923-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220728150610-12923-m03 --driver=docker : (26.922911804s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220728150610-12923
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220728150610-12923: exit status 80 (605.823053ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220728150610-12923
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220728150610-12923-m03 already exists in multinode-20220728150610-12923-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220728150610-12923-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220728150610-12923-m03: (2.676292687s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.64s)

TestScheduledStopUnix (100.76s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220728152035-12923 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220728152035-12923 --memory=2048 --driver=docker : (26.366753685s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220728152035-12923 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220728152035-12923 -n scheduled-stop-20220728152035-12923
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220728152035-12923 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220728152035-12923 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220728152035-12923 -n scheduled-stop-20220728152035-12923
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220728152035-12923
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220728152035-12923 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0728 15:22:13.976941   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220728152035-12923
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220728152035-12923: exit status 7 (116.104062ms)

-- stdout --
	scheduled-stop-20220728152035-12923
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220728152035-12923 -n scheduled-stop-20220728152035-12923
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220728152035-12923 -n scheduled-stop-20220728152035-12923: exit status 7 (112.939252ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220728152035-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220728152035-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220728152035-12923: (2.416652476s)
--- PASS: TestScheduledStopUnix (100.76s)
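The scheduled-stop flow above is: arm a stop with --schedule, optionally disarm it with --cancel-scheduled, then poll `status --format={{.Host}}` until the host reads Stopped (status exits 7 once it does, as the run shows). A Go sketch of the arm-and-verify half, reusing the commands from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	bin := "out/minikube-darwin-amd64"
	profile := "scheduled-stop-20220728152035-12923"

	// Arm a stop 15 seconds out, as scheduled_stop_test.go:137 does.
	if err := exec.Command(bin, "stop", "-p", profile, "--schedule", "15s").Run(); err != nil {
		panic(err)
	}

	// Wait past the deadline, then read the host state. status exits 7
	// once the host is stopped (see the run above), so ignore the error
	// and look at stdout, which should read "Stopped".
	time.Sleep(20 * time.Second)
	out, _ := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile).Output()
	fmt.Print(string(out))
}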

TestSkaffold (61.08s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe214449410 version
skaffold_test.go:63: skaffold version: v1.39.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220728152216-12923 --memory=2600 --driver=docker 
E0728 15:22:37.930970   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220728152216-12923 --memory=2600 --driver=docker : (28.126548887s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe214449410 run --minikube-profile skaffold-20220728152216-12923 --kube-context skaffold-20220728152216-12923 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe214449410 run --minikube-profile skaffold-20220728152216-12923 --kube-context skaffold-20220728152216-12923 --status-check=true --port-forward=false --interactive=false: (18.229558919s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-6667c8f89f-2s4p4" [fd7d1f33-1ee1-4c65-928f-5be52c2d3484] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.014020482s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-5fdc4c5966-r67hf" [b7fb0ed8-2d8a-4b10-84aa-12fe65b3d2e3] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.007959683s
helpers_test.go:175: Cleaning up "skaffold-20220728152216-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220728152216-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220728152216-12923: (2.976797507s)
--- PASS: TestSkaffold (61.08s)

TestInsufficientStorage (12.63s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220728152317-12923 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220728152317-12923 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (9.385945647s)

-- stdout --
	{"specversion":"1.0","id":"aeffbe9a-4cee-4171-8dd1-dc6e2cf2301e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220728152317-12923] minikube v1.26.0 on Darwin 12.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0088722-46b0-4496-9f8b-d3fd5df34fec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14555"}}
	{"specversion":"1.0","id":"67f4f157-be42-4a1c-b5d0-e82159709e5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig"}}
	{"specversion":"1.0","id":"38d56f64-1170-4f34-afb6-acb6348a9737","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"e26d6b21-4158-42b8-91f0-73c795d7b7df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"036f15e7-4e36-4f56-b62e-93610f72ea76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube"}}
	{"specversion":"1.0","id":"6f99f6cb-928c-44ea-97b0-e39f12dc4e75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a689f4b2-72fd-41e0-87d4-3e5200d97120","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6ef7f86e-5c34-4f7b-b82a-b8ec855daccb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e71f14c8-2c3a-47e5-af70-4831afb79c38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"96294a34-82d9-45a1-b4e3-4c224546bf37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220728152317-12923 in cluster insufficient-storage-20220728152317-12923","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"784dd6b9-6141-4d56-b09c-98989117fd29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"35b7f617-73da-4a83-ab27-45e619a38a49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6eea8156-63bf-4ca1-bbc7-95a7524efd14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
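Each line of the start output above is a CloudEvents-style JSON envelope (specversion, id, source, type, data). A minimal Go sketch of scanning such a stream and surfacing the step and error events; the struct mirrors only the envelope fields visible above, not minikube's full schema:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors only the envelope fields visible in the log lines above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. piped from a minikube start --output=json run
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // not a JSON event line
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n",
					ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error %s (exit %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}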
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220728152317-12923 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220728152317-12923 --output=json --layout=cluster: exit status 7 (405.278655ms)

-- stdout --
	{"Name":"insufficient-storage-20220728152317-12923","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220728152317-12923","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0728 15:23:27.564215   22963 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220728152317-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220728152317-12923 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220728152317-12923 --output=json --layout=cluster: exit status 7 (407.424779ms)

-- stdout --
	{"Name":"insufficient-storage-20220728152317-12923","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.26.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220728152317-12923","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0728 15:23:27.971971   22973 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220728152317-12923" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	E0728 15:23:27.980248   22973 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/insufficient-storage-20220728152317-12923/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220728152317-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220728152317-12923
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220728152317-12923: (2.435184551s)
--- PASS: TestInsufficientStorage (12.63s)
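Both status invocations above show how minikube status --output=json --layout=cluster reports degraded state twice over: HTTP-style codes inside the payload (507 InsufficientStorage, 405 Stopped, 500 Error) and a nonzero process exit code (7). A minimal Go sketch of decoding that payload, using a struct that mirrors only the fields printed above rather than minikube's complete schema:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// These types cover only the fields visible in the status output above.
	type component struct {
		Name       string
		StatusCode int
		StatusName string
	}

	type node struct {
		Name       string
		StatusCode int
		StatusName string
		Components map[string]component
	}

	type clusterState struct {
		Name         string
		StatusCode   int
		StatusName   string
		StatusDetail string
		Components   map[string]component
		Nodes        []node
	}

	func main() {
		// Trimmed copy of the first status payload above.
		raw := []byte(`{"Name":"insufficient-storage-20220728152317-12923","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220728152317-12923","StatusCode":507,"StatusName":"InsufficientStorage"}]}`)
		var st clusterState
		if err := json.Unmarshal(raw, &st); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d %s (%s)\n", st.Name, st.StatusCode, st.StatusName, st.StatusDetail)
	}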

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.49s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.26.0 on darwin
- MINIKUBE_LOCATION=14555
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2220956785/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2220956785/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2220956785/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2220956785/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.49s)
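The chown root:wheel plus chmod u+s pair above is what gives the hyperkit driver binary the root-owned setuid bit it needs; the "Unable to update hyperkit driver" warning is non-fatal here, and the test still passes with the existing driver. A small Go sketch of checking for that setuid bit, with an illustrative driver path rather than the test's temp MINIKUBE_HOME:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Illustrative location; the run above uses a per-test temp directory.
		const driver = "/usr/local/bin/docker-machine-driver-hyperkit"
		info, err := os.Stat(driver)
		if err != nil {
			fmt.Println("driver not found:", err)
			return
		}
		if info.Mode()&os.ModeSetuid != 0 {
			fmt.Println("setuid bit set; driver can elevate without sudo")
		} else {
			fmt.Println("setuid bit missing; minikube would re-run sudo chown/chmod u+s")
		}
	}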

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.34s)
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.26.0 on darwin
- MINIKUBE_LOCATION=14555
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3895987130/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3895987130/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3895987130/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3895987130/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.34s)

TestStoppedBinaryUpgrade/Setup (0.93s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.93s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.54s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220728152857-12923
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220728152857-12923: (3.54182878s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.54s)

TestPause/serial/Start (42.84s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220728152948-12923 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220728152948-12923 --memory=2048 --install-addons=false --wait=all --driver=docker : (42.837037066s)
--- PASS: TestPause/serial/Start (42.84s)

TestPause/serial/SecondStartNoReconfiguration (34.42s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220728152948-12923 --alsologtostderr -v=1 --driver=docker 
E0728 15:30:40.993834   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 15:30:48.634844   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220728152948-12923 --alsologtostderr -v=1 --driver=docker : (34.408048864s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.42s)

TestPause/serial/Pause (0.75s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220728152948-12923 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220728153211-12923 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220728153211-12923 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (339.236868ms)

-- stdout --
	* [NoKubernetes-20220728153211-12923] minikube v1.26.0 on Darwin 12.5
	  - MINIKUBE_LOCATION=14555
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)
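Exit status 14 here corresponds to the MK_USAGE error in the stderr block: the command is rejected before any cluster work because --no-kubernetes and --kubernetes-version are mutually exclusive. A hypothetical Go sketch of that kind of flag validation (names and structure are illustrative, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os"
	)

	// exitUsage matches the exit status observed above for usage errors.
	const exitUsage = 14

	// validate is an illustrative stand-in for minikube's own flag checks.
	func validate(noKubernetes bool, kubernetesVersion string) error {
		if noKubernetes && kubernetesVersion != "" {
			return fmt.Errorf("cannot specify --kubernetes-version with --no-kubernetes")
		}
		return nil
	}

	func main() {
		if err := validate(true, "1.20"); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
			os.Exit(exitUsage)
		}
	}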

TestNoKubernetes/serial/StartWithK8s (29.36s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220728153211-12923 --driver=docker 
E0728 15:32:13.983357   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220728153211-12923 --driver=docker : (28.928797336s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220728153211-12923 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.36s)

TestNoKubernetes/serial/StartWithStopK8s (17.18s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220728153211-12923 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220728153211-12923 --no-kubernetes --driver=docker : (14.275752066s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220728153211-12923 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220728153211-12923 status -o json: exit status 2 (422.777208ms)

-- stdout --
	{"Name":"NoKubernetes-20220728153211-12923","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220728153211-12923
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220728153211-12923: (2.480353333s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.18s)
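The status line above uses the default flat layout, and the nonzero exit (status 2) goes with the state it reports: host Running, kubelet and apiserver Stopped, which is exactly what a --no-kubernetes start should leave behind. A small Go sketch decoding that shape, with field names copied from the output above:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileStatus copies the fields from the status line above.
	type profileStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := []byte(`{"Name":"NoKubernetes-20220728153211-12923","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
		var st profileStatus
		if err := json.Unmarshal(raw, &st); err != nil {
			panic(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}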

TestNoKubernetes/serial/Start (6.69s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220728153211-12923 --no-kubernetes --driver=docker 
E0728 15:33:04.787923   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220728153211-12923 --no-kubernetes --driver=docker : (6.688730238s)
--- PASS: TestNoKubernetes/serial/Start (6.69s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.4s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220728153211-12923 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220728153211-12923 "sudo systemctl is-active --quiet service kubelet": exit status 1 (404.020915ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.40s)
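systemctl is-active exits with status 3 when the queried unit is inactive, which minikube ssh surfaces as the exit status 1 shown above; the test passes precisely because the probe fails. A minimal Go sketch of that assert-it-fails pattern, assuming a Linux host with systemd:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirrors the probe above; assumes a Linux host with systemd.
		cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
		err := cmd.Run()
		var ee *exec.ExitError
		switch {
		case errors.As(err, &ee):
			// Exit status 3 (unit inactive) is the outcome the test wants.
			fmt.Println("kubelet inactive as expected, exit status:", ee.ExitCode())
		case err == nil:
			fmt.Println("unexpected: kubelet is active")
		default:
			fmt.Println("could not run systemctl:", err)
		}
	}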

TestNoKubernetes/serial/ProfileList (4.24s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (3.291062698s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.24s)

TestNoKubernetes/serial/Stop (1.66s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220728153211-12923

=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220728153211-12923: (1.660259188s)
--- PASS: TestNoKubernetes/serial/Stop (1.66s)

TestNoKubernetes/serial/StartNoArgs (5.85s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220728153211-12923 --driver=docker 

=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220728153211-12923 --driver=docker : (5.84930394s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (5.85s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.41s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220728153211-12923 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220728153211-12923 "sudo systemctl is-active --quiet service kubelet": exit status 1 (406.631286ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.41s)

TestNetworkPlugins/group/auto/Start (46.31s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220728152330-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220728152330-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (46.312614345s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.31s)
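Each TestNetworkPlugins group that follows boots its own profile with one networking variant (no explicit CNI for auto, --cni=kindnet/cilium/calico/false/bridge, --enable-default-cni=true, or --network-plugin=kubenet) and then runs the same KubeletFlags / NetCatPod / DNS / Localhost / HairPin battery against it. A condensed, illustrative sketch of that table-driven shape, with flags taken from the logged commands (not net_test.go verbatim):

	package main

	import "fmt"

	// variant condenses the start commands logged in this group.
	type variant struct {
		name      string
		extraArgs []string
	}

	func main() {
		variants := []variant{
			{"auto", nil},
			{"kindnet", []string{"--cni=kindnet"}},
			{"cilium", []string{"--cni=cilium"}},
			{"calico", []string{"--cni=calico"}},
			{"false", []string{"--cni=false"}},
			{"bridge", []string{"--cni=bridge"}},
			{"enable-default-cni", []string{"--enable-default-cni=true"}},
			{"kubenet", []string{"--network-plugin=kubenet"}},
		}
		for _, v := range variants {
			args := append([]string{"start", "-p", v.name, "--memory=2048",
				"--alsologtostderr", "--wait=true", "--wait-timeout=5m"}, v.extraArgs...)
			args = append(args, "--driver=docker")
			fmt.Println("minikube", args)
		}
	}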

TestNetworkPlugins/group/kindnet/Start (50.66s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220728152331-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
E0728 15:33:32.473038   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-20220728152331-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (50.656304312s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (50.66s)

TestNetworkPlugins/group/auto/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220728152330-12923 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

TestNetworkPlugins/group/auto/NetCatPod (11.76s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220728152330-12923 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context auto-20220728152330-12923 replace --force -f testdata/netcat-deployment.yaml: (1.735028278s)
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-kmfq6" [aecaf09d-2c08-4338-8700-c316e9fecb69] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-kmfq6" [aecaf09d-2c08-4338-8700-c316e9fecb69] Running

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.009753937s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.76s)
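The NetCatPod steps all follow the same wait loop: replace testdata/netcat-deployment.yaml into the cluster, then poll pods matching app=netcat until each reports Running, under the 15m ceiling logged above (the test's helper also tracks readiness conditions, as the Pending / ContainersNotReady lines show). A rough Go sketch of that polling pattern, shelling out to kubectl, with the context name and selector mirroring the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podsRunning reports whether every pod matching selector is in phase Running.
	func podsRunning(kubeContext, selector string) bool {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err != nil {
			return false
		}
		phases := strings.Fields(string(out))
		if len(phases) == 0 {
			return false
		}
		for _, p := range phases {
			if p != "Running" {
				return false
			}
		}
		return true
	}

	func main() {
		deadline := time.Now().Add(15 * time.Minute) // ceiling used above
		for time.Now().Before(deadline) {
			if podsRunning("auto-20220728152330-12923", "app=netcat") {
				fmt.Println("app=netcat healthy")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for app=netcat")
	}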

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-86cpr" [7edca099-3cc9-4b58-b425-e0a680d09a45] Running

=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.01598895s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-20220728152331-12923 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.63s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220728152331-12923 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Done: kubectl --context kindnet-20220728152331-12923 replace --force -f testdata/netcat-deployment.yaml: (1.600125138s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-7hxp6" [cbf7c110-09ef-47af-82c9-0ef1f6c84838] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-7hxp6" [cbf7c110-09ef-47af-82c9-0ef1f6c84838] Running

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.007952745s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.63s)

TestNetworkPlugins/group/auto/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220728152330-12923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (5.11s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.111138003s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.11s)
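HairPin probes whether a pod can reach itself through its own service (the nc -w 5 ... -z netcat 8080 above runs inside the netcat deployment). For this driver and default network the probe times out with exit status 1, and the test treats that failed probe as the expected outcome, which is why it still passes. A hedged sketch of the probe as a plain TCP dial, assuming the in-cluster service name netcat on port 8080:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Run from inside the pod; "netcat" resolves via cluster DNS.
		conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
		if err != nil {
			fmt.Println("hairpin blocked:", err) // the expected (passing) outcome here
			return
		}
		conn.Close()
		fmt.Println("hairpin connectivity available")
	}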

TestNetworkPlugins/group/cilium/Start (78.21s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220728152331-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220728152331-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m18.205100903s)
--- PASS: TestNetworkPlugins/group/cilium/Start (78.21s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220728152331-12923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220728152331-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220728152331-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (73.65s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220728152331-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-20220728152331-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (1m13.653466935s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.65s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-p26dr" [eac27851-6a48-48f2-b8f3-1156d3d87fdf] Running

=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.016903923s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-p4zbx" [9ab8ed90-4b3a-4203-bac5-0eea01cc181e] Running

=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.016784898s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220728152331-12923 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.44s)

TestNetworkPlugins/group/cilium/NetCatPod (13.09s)
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220728152331-12923 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Done: kubectl --context cilium-20220728152331-12923 replace --force -f testdata/netcat-deployment.yaml: (2.046375002s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-kljdm" [fd35f710-e878-4758-bb30-faea35784173] Pending

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-kljdm" [fd35f710-e878-4758-bb30-faea35784173] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-kljdm" [fd35f710-e878-4758-bb30-faea35784173] Running

=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 11.009803951s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (13.09s)

TestNetworkPlugins/group/calico/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-20220728152331-12923 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

TestNetworkPlugins/group/calico/NetCatPod (11.76s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220728152331-12923 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Done: kubectl --context calico-20220728152331-12923 replace --force -f testdata/netcat-deployment.yaml: (1.720718724s)
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-mhrkb" [4a187eb9-5441-4915-8f0c-2d1b4a8becba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-869c55b6dc-mhrkb" [4a187eb9-5441-4915-8f0c-2d1b4a8becba] Running

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.008319048s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.76s)

TestNetworkPlugins/group/cilium/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220728152331-12923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.13s)

TestNetworkPlugins/group/cilium/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220728152331-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.12s)

TestNetworkPlugins/group/cilium/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220728152331-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.11s)

TestNetworkPlugins/group/calico/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220728152331-12923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-20220728152331-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-20220728152331-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/false/Start (81.82s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220728152331-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220728152331-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (1m21.815397235s)
--- PASS: TestNetworkPlugins/group/false/Start (81.82s)

TestNetworkPlugins/group/bridge/Start (43.96s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220728152330-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220728152330-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (43.957688264s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.96s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220728152330-12923 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

TestNetworkPlugins/group/bridge/NetCatPod (11.6s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220728152330-12923 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context bridge-20220728152330-12923 replace --force -f testdata/netcat-deployment.yaml: (1.568386435s)
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-blk7w" [321e7dee-4e1d-4486-8561-ecb29b367a07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0728 15:36:57.036494   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
helpers_test.go:342: "netcat-869c55b6dc-blk7w" [321e7dee-4e1d-4486-8561-ecb29b367a07] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.0102015s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.60s)

TestNetworkPlugins/group/bridge/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220728152330-12923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (43.7s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220728152330-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E0728 15:37:13.979137   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220728152330-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (43.700073942s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (43.70s)

TestNetworkPlugins/group/false/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220728152331-12923 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.46s)

TestNetworkPlugins/group/false/NetCatPod (12.56s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-20220728152331-12923 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context false-20220728152331-12923 replace --force -f testdata/netcat-deployment.yaml: (1.52623574s)
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-sld9w" [afef2780-7253-4cfe-bcb7-4615d8a8264e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-sld9w" [afef2780-7253-4cfe-bcb7-4615d8a8264e] Running
E0728 15:37:37.932372   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.007084821s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.56s)

TestNetworkPlugins/group/false/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-20220728152331-12923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

TestNetworkPlugins/group/false/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-20220728152331-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (5.11s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-20220728152331-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-20220728152331-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.112908708s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.11s)

TestNetworkPlugins/group/kubenet/Start (45.9s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220728152330-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-20220728152330-12923 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (45.902190455s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (45.90s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.51s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220728152330-12923 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.51s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.8s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220728152330-12923 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context enable-default-cni-20220728152330-12923 replace --force -f testdata/netcat-deployment.yaml: (1.763562846s)
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-8zl8b" [6ae1ad56-d985-4a58-8dde-5e375847bf3b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-8zl8b" [6ae1ad56-d985-4a58-8dde-5e375847bf3b] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.007580286s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.80s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220728152330-12923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-20220728152330-12923 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.59s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-20220728152330-12923 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context kubenet-20220728152330-12923 replace --force -f testdata/netcat-deployment.yaml: (1.564116839s)
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-869c55b6dc-fxq2g" [b02aa864-3001-4a76-a187-091747ce27f9] Pending
helpers_test.go:342: "netcat-869c55b6dc-fxq2g" [b02aa864-3001-4a76-a187-091747ce27f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-869c55b6dc-fxq2g" [b02aa864-3001-4a76-a187-091747ce27f9] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.008325542s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.59s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-20220728152330-12923 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (50.93s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220728153949-12923 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3
E0728 15:39:51.563530   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:29.058591   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:40:32.523660   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220728153949-12923 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3: (50.930431107s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.93s)
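
Note: this group starts with --preload=false, so minikube skips the preloaded images tarball (the same cache the preload-exists check near the top of this report stats) and pulls each Kubernetes image individually, which typically makes FirstStart slower than a preloaded start. A hedged look at what that cache holds, assuming the default .minikube location used throughout this run:

  $ ls "$HOME/.minikube/cache/preloaded-tarball/"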

TestStartStop/group/no-preload/serial/DeployApp (9.76s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220728153949-12923 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context no-preload-20220728153949-12923 create -f testdata/busybox.yaml: (1.624612964s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [633b8b7d-b53b-4408-bd4d-f4faf4b3b6cf] Pending
helpers_test.go:342: "busybox" [633b8b7d-b53b-4408-bd4d-f4faf4b3b6cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0728 15:40:43.568500   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:43.574939   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:43.585121   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:43.605421   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:43.646101   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:43.727168   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:43.887328   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:44.208369   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:44.850301   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
helpers_test.go:342: "busybox" [633b8b7d-b53b-4408-bd4d-f4faf4b3b6cf] Running
E0728 15:40:45.940762   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:45.945844   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:45.955953   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:45.976854   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:46.017592   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:46.097861   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:46.130587   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:46.258719   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:46.579595   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:47.219964   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:48.501110   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:48.690797   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.012688423s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-20220728153949-12923 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.76s)
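
Note: the E0728 cert_rotation.go:168 lines interleaved above are client-go's certificate-rotation watcher still chasing kubeconfig contexts for profiles torn down earlier in the run (cilium, calico, ...), whose client.crt files are gone; they are background noise, not failures of the current test. A hedged sketch for spotting such stale contexts:

  # contexts the shared kubeconfig still lists
  $ kubectl config get-contexts -o name
  # profile directories that actually remain on disk
  $ ls "$HOME/.minikube/profiles/"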

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220728153949-12923 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-20220728153949-12923 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/no-preload/serial/Stop (12.47s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220728153949-12923 --alsologtostderr -v=3
E0728 15:40:51.061518   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:53.813066   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:40:56.181980   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220728153949-12923 --alsologtostderr -v=3: (12.471466575s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.47s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220728153949-12923 -n no-preload-20220728153949-12923
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220728153949-12923 -n no-preload-20220728153949-12923: exit status 7 (113.859638ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220728153949-12923 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)
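
Note on "exit status 7 (may be ok)": immediately after a deliberate stop, a non-zero exit from minikube status is expected. My reading (an assumption, not a documented contract) is that status composes its exit code from bit flags for the host, the cluster, and Kubernetes each not running, so a fully stopped profile reports 7 while printing Stopped:

  $ out/minikube-darwin-amd64 status --format={{.Host}} -p <profile> -n <profile>
  $ echo "status exit: $?"   # 7 for the stopped profiles in the runs above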

TestStartStop/group/no-preload/serial/SecondStart (302.57s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220728153949-12923 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3
E0728 15:41:04.053388   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:41:06.422322   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:41:24.533719   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:41:26.902552   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:41:50.979592   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:41:52.850696   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:41:52.855821   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:41:52.865984   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:41:52.886149   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:41:52.926592   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:41:53.008644   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:41:53.170863   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:41:53.493144   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:41:54.134803   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:41:54.442646   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:41:55.415080   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:41:57.975287   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:03.097581   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:05.493314   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:07.862060   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:42:13.337752   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:42:13.978873   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220728153949-12923 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.3: (5m2.100858166s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220728153949-12923 -n no-preload-20220728153949-12923
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (302.57s)

TestStartStop/group/old-k8s-version/serial/Stop (1.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220728153807-12923 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220728153807-12923 --alsologtostderr -v=3: (1.61738071s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.62s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220728153807-12923 -n old-k8s-version-20220728153807-12923: exit status 7 (116.718884ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220728153807-12923 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-j2qnw" [ab9a025e-16ee-44e3-a3ee-97431afb9113] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-j2qnw" [ab9a025e-16ee-44e3-a3ee-97431afb9113] Running
E0728 15:46:11.249708   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.016460585s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.73s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-j2qnw" [ab9a025e-16ee-44e3-a3ee-97431afb9113] Running
E0728 15:46:13.621237   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007889457s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-20220728153949-12923 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Done: kubectl --context no-preload-20220728153949-12923 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.717023827s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.73s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220728153949-12923 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.46s)
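
Note: VerifyKubernetesImages dumps the node runtime's image list as JSON over SSH and reports anything outside the expected Kubernetes set, such as the busybox test image above. A hedged sketch for browsing the same list by hand (jq is an assumption, not part of the suite):

  $ out/minikube-darwin-amd64 ssh -p <profile> "sudo crictl images -o json" | jq -r '.images[].repoTags[]'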

TestStartStop/group/embed-certs/serial/FirstStart (45.07s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220728154707-12923 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3
E0728 15:47:13.972064   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/addons-20220728143944-12923/client.crt: no such file or directory
E0728 15:47:20.535816   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory
E0728 15:47:20.981781   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 15:47:29.350666   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
E0728 15:47:37.926459   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/functional-20220728144449-12923/client.crt: no such file or directory
E0728 15:47:51.975352   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220728154707-12923 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3: (45.072179864s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.07s)

TestStartStop/group/embed-certs/serial/DeployApp (10.81s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220728154707-12923 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context embed-certs-20220728154707-12923 create -f testdata/busybox.yaml: (1.686295763s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [17f2da87-d54c-482a-a629-f247e9b436fa] Pending
helpers_test.go:342: "busybox" [17f2da87-d54c-482a-a629-f247e9b436fa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [17f2da87-d54c-482a-a629-f247e9b436fa] Running
E0728 15:47:57.073658   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.012328356s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-20220728154707-12923 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.81s)
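
Note: once busybox is Running, DeployApp also execs "ulimit -n" in it, using the container's open-file-descriptor limit as a quick node-configuration sanity check (my reading of the test, not a documented contract). By hand:

  $ kubectl --context <profile> exec busybox -- /bin/sh -c "ulimit -n"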

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220728154707-12923 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-20220728154707-12923 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/embed-certs/serial/Stop (12.5s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220728154707-12923 --alsologtostderr -v=3
E0728 15:48:04.778251   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/skaffold-20220728152216-12923/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220728154707-12923 --alsologtostderr -v=3: (12.495785849s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.50s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220728154707-12923 -n embed-certs-20220728154707-12923
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220728154707-12923 -n embed-certs-20220728154707-12923: exit status 7 (113.689411ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220728154707-12923 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/embed-certs/serial/SecondStart (299.93s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220728154707-12923 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3
E0728 15:48:19.669700   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/enable-default-cni-20220728152330-12923/client.crt: no such file or directory
E0728 15:48:36.600402   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:49:04.291958   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kubenet-20220728152330-12923/client.crt: no such file or directory
E0728 15:49:07.120558   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/auto-20220728152330-12923/client.crt: no such file or directory
E0728 15:49:10.588977   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory
E0728 15:50:41.931399   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:50:41.937280   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:50:41.947481   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:50:41.968395   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:50:42.008527   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:50:42.088626   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:50:42.250421   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:50:42.570869   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:50:43.211012   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:50:43.558748   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
E0728 15:50:44.491579   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:50:45.932908   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
E0728 15:50:47.052111   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:50:52.172669   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:51:02.412638   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:51:22.892632   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 15:51:52.841325   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/bridge-20220728152330-12923/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220728154707-12923 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.24.3: (4m59.441651949s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220728154707-12923 -n embed-certs-20220728154707-12923
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (299.93s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-nj4nt" [4ebca962-177f-4a70-9d72-b89712a84628] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-nj4nt" [4ebca962-177f-4a70-9d72-b89712a84628] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.012232335s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.56s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-nj4nt" [4ebca962-177f-4a70-9d72-b89712a84628] Running
E0728 15:53:25.771992   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009298303s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-20220728154707-12923 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Done: kubectl --context embed-certs-20220728154707-12923 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.554001481s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.56s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220728154707-12923 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.45s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (46.15s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220728155420-12923 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220728155420-12923 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3: (46.150792186s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (46.15s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.7s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220728155420-12923 create -f testdata/busybox.yaml

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Done: kubectl --context default-k8s-different-port-20220728155420-12923 create -f testdata/busybox.yaml: (1.581333168s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [b31168d3-e66c-4d52-856e-fff847827222] Pending
helpers_test.go:342: "busybox" [b31168d3-e66c-4d52-856e-fff847827222] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [b31168d3-e66c-4d52-856e-fff847827222] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.012701731s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-different-port-20220728155420-12923 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.70s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220728155420-12923 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-different-port-20220728155420-12923 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.81s)

TestStartStop/group/default-k8s-different-port/serial/Stop (12.57s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220728155420-12923 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220728155420-12923 --alsologtostderr -v=3: (12.574628754s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (12.57s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220728155420-12923 -n default-k8s-different-port-20220728155420-12923
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220728155420-12923 -n default-k8s-different-port-20220728155420-12923: exit status 7 (115.781627ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220728155420-12923 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (303.1s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220728155420-12923 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3
E0728 15:55:33.653334   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/kindnet-20220728152331-12923/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220728155420-12923 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.24.3: (5m2.590697356s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220728155420-12923 -n default-k8s-different-port-20220728155420-12923
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (303.10s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-7sfms" [98861945-a85a-4fc2-8f87-03ab3cd624cf] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-7sfms" [98861945-a85a-4fc2-8f87-03ab3cd624cf] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.012935564s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.56s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-5fd5574d9f-7sfms" [98861945-a85a-4fc2-8f87-03ab3cd624cf] Running
E0728 16:00:41.922406   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/no-preload-20220728153949-12923/client.crt: no such file or directory
E0728 16:00:43.549026   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/cilium-20220728152331-12923/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009608523s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-different-port-20220728155420-12923 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0728 16:00:45.920904   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/calico-20220728152331-12923/client.crt: no such file or directory
start_stop_delete_test.go:291: (dbg) Done: kubectl --context default-k8s-different-port-20220728155420-12923 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.548799012s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.56s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220728155420-12923 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.45s)

TestStartStop/group/newest-cni/serial/FirstStart (42.03s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220728160133-12923 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220728160133-12923 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3: (42.029514622s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.03s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220728160133-12923 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/newest-cni/serial/Stop (12.49s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220728160133-12923 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220728160133-12923 --alsologtostderr -v=3: (12.489256083s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.49s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.4s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220728160133-12923 -n newest-cni-20220728160133-12923
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220728160133-12923 -n newest-cni-20220728160133-12923: exit status 7 (114.415861ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220728160133-12923 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
E0728 16:02:29.337130   12923 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--14555-11737-f5b83d4f17589580a6f3f2e048ea841d5f2ba2cd/.minikube/profiles/false-20220728152331-12923/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/newest-cni/serial/SecondStart (17.85s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220728160133-12923 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220728160133-12923 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.24.3: (17.326656867s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220728160133-12923 -n newest-cni-20220728160133-12923
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.85s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220728160133-12923 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

Test skip (18/289)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.24.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.24.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.24.3/cached-images (0.00s)

TestDownloadOnly/v1.24.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.24.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.24.3/binaries (0.00s)

TestAddons/parallel/Registry (17.83s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:282: registry stabilized in 10.709453ms
addons_test.go:284: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-rxtj6" [431aa29c-c406-4e4e-b9f0-448f06e4a356] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:284: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010230713s
addons_test.go:287: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-l8c5f" [a25c3fc2-d2b4-4af6-b3cc-7627fc1a4d16] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:287: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009696295s
addons_test.go:292: (dbg) Run:  kubectl --context addons-20220728143944-12923 delete po -l run=registry-test --now

=== CONT  TestAddons/parallel/Registry
addons_test.go:292: (dbg) Done: kubectl --context addons-20220728143944-12923 delete po -l run=registry-test --now: (2.591082088s)
addons_test.go:297: (dbg) Run:  kubectl --context addons-20220728143944-12923 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) Done: kubectl --context addons-20220728143944-12923 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.199397068s)
addons_test.go:307: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (17.83s)

TestAddons/parallel/Ingress (23.2s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:164: (dbg) Run:  kubectl --context addons-20220728143944-12923 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:184: (dbg) Run:  kubectl --context addons-20220728143944-12923 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:197: (dbg) Run:  kubectl --context addons-20220728143944-12923 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [ffe56c24-d252-4ea9-83ce-c1122bd84e39] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [ffe56c24-d252-4ea9-83ce-c1122bd84e39] Running
addons_test.go:202: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 22.010488768s
addons_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220728143944-12923 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:234: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (23.20s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:450: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (14.13s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220728144449-12923 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220728144449-12923 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-578cdc45cb-zds9n" [eec0da7b-1396-49b5-9047-68db7deedfb9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-578cdc45cb-zds9n" [eec0da7b-1396-49b5-9047-68db7deedfb9] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.009094201s
functional_test.go:1575: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (14.13s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.7s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220728152330-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220728152330-12923
--- SKIP: TestNetworkPlugins/group/flannel (0.70s)

TestNetworkPlugins/group/custom-flannel (0.58s)
=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-20220728152331-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-20220728152331-12923
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.58s)

TestStartStop/group/disable-driver-mounts (0.44s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220728155419-12923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220728155419-12923
--- SKIP: TestStartStop/group/disable-driver-mounts (0.44s)
