Test Report: Docker_macOS 13326

7d129a660e0abf125cce994bee2942d8ab6dd57f:2022-01-25:22392

Tests failed (11/281)

TestDownloadOnly/v1.16.0/preload-exists (0.18s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
aaa_download_only_test.go:109: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/preload-exists (0.18s)
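
Both this failure and the v1.23.2 one below come down to the same missing file under .minikube/cache/preloaded-tarball. Below is a minimal Go sketch, not the test's actual code, of the kind of existence check aaa_download_only_test.go:109 is reporting on; the cache layout and tarball name are copied from the stat error above, and the HOME-relative cache root is an assumption:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Assumption: the cache root sits under $HOME/.minikube, matching the
	// Jenkins workspace path quoted in the failure above.
	tarball := filepath.Join(os.Getenv("HOME"), ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v17-v1.16.0-docker-overlay2-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		// Mirrors the test's message: a plain stat failure, typically
		// "no such file or directory" when the download never happened.
		fmt.Printf("failed to verify preloaded tarball file exists: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("preload tarball present:", tarball)
}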

TestDownloadOnly/v1.23.2/preload-exists (0.06s)

=== RUN   TestDownloadOnly/v1.23.2/preload-exists
aaa_download_only_test.go:109: failed to verify preloaded tarball file exists: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4: no such file or directory
--- FAIL: TestDownloadOnly/v1.23.2/preload-exists (0.06s)

TestDownloadOnlyKic (3.03s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-20220125155905-11219 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:230: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-20220125155905-11219 --force --alsologtostderr --driver=docker : (2.310811528s)
aaa_download_only_test.go:238: failed to read tarball file "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4": open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4: no such file or directory
aaa_download_only_test.go:248: failed to read checksum file "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4.checksum" : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4.checksum: no such file or directory
aaa_download_only_test.go:251: failed to verify checksum. checksum of "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4" does not match remote checksum ("" != "\xd4\x1d\x8cُ\x00\xb2\x04\xe9\x80\t\x98\xec\xf8B~")
helpers_test.go:176: Cleaning up "download-docker-20220125155905-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-20220125155905-11219
--- FAIL: TestDownloadOnlyKic (3.03s)
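
One detail worth noting in the checksum-mismatch line above: the quoted "remote checksum" bytes are exactly the raw MD5 digest of empty input (d41d8cd98f00b204e9800998ecf8427e), i.e. a checksum computed over zero bytes, which fits the tarball never having been fetched. A small Go check of that reading; the byte string is copied from the log:

package main

import (
	"crypto/md5"
	"fmt"
)

func main() {
	// "Remote checksum" bytes as printed by the failing test.
	remote := "\xd4\x1d\x8c\xd9\x8f\x00\xb2\x04\xe9\x80\t\x98\xec\xf8B~"
	empty := md5.Sum(nil) // MD5 over zero bytes of input
	fmt.Printf("remote bytes: %x\n", remote)   // d41d8cd98f00b204e9800998ecf8427e
	fmt.Printf("md5(empty):   %x\n", empty[:]) // d41d8cd98f00b204e9800998ecf8427e
	fmt.Println("equal:", remote == string(empty[:]))
}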

TestFunctional/parallel/DashboardCmd (304.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:906: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220125160520-11219 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:919: output didn't produce a URL
functional_test.go:911: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220125160520-11219 --alsologtostderr -v=1] ...
functional_test.go:911: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220125160520-11219 --alsologtostderr -v=1] stdout:
functional_test.go:911: (dbg) [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-20220125160520-11219 --alsologtostderr -v=1] stderr:
I0125 16:09:57.632828   14114 out.go:297] Setting OutFile to fd 1 ...
I0125 16:09:57.633801   14114 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0125 16:09:57.633808   14114 out.go:310] Setting ErrFile to fd 2...
I0125 16:09:57.633812   14114 out.go:344] TERM=,COLORTERM=, which probably does not support color
I0125 16:09:57.633884   14114 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/bin
I0125 16:09:57.634071   14114 mustload.go:65] Loading cluster: functional-20220125160520-11219
I0125 16:09:57.634360   14114 config.go:176] Loaded profile config "functional-20220125160520-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
I0125 16:09:57.634705   14114 cli_runner.go:133] Run: docker container inspect functional-20220125160520-11219 --format={{.State.Status}}
I0125 16:09:57.742607   14114 host.go:66] Checking if "functional-20220125160520-11219" exists ...
I0125 16:09:57.742948   14114 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-20220125160520-11219
I0125 16:09:57.853681   14114 api_server.go:165] Checking apiserver status ...
I0125 16:09:57.853792   14114 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0125 16:09:57.853880   14114 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220125160520-11219
I0125 16:09:58.009783   14114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61868 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/functional-20220125160520-11219/id_rsa Username:docker}
I0125 16:09:58.117016   14114 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6009/cgroup
I0125 16:09:58.125922   14114 api_server.go:181] apiserver freezer: "7:freezer:/docker/1d09011e1335454602abd1a4e331953b9a592ddf22d8c990341e894fbdb829c4/kubepods/burstable/pod60e42e92ce4cf504907f1eca6268ff31/a9e46576f8c184db8fad8523ce733f400660dd239f903914ab67ac6e07253bf9"
I0125 16:09:58.126021   14114 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1d09011e1335454602abd1a4e331953b9a592ddf22d8c990341e894fbdb829c4/kubepods/burstable/pod60e42e92ce4cf504907f1eca6268ff31/a9e46576f8c184db8fad8523ce733f400660dd239f903914ab67ac6e07253bf9/freezer.state
I0125 16:09:58.133568   14114 api_server.go:203] freezer state: "THAWED"
I0125 16:09:58.133591   14114 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:61872/healthz ...
I0125 16:09:58.139395   14114 api_server.go:266] https://127.0.0.1:61872/healthz returned 200:
ok
W0125 16:09:58.139425   14114 out.go:241] * Enabling dashboard ...
* Enabling dashboard ...
I0125 16:09:58.139602   14114 config.go:176] Loaded profile config "functional-20220125160520-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
I0125 16:09:58.139613   14114 addons.go:65] Setting dashboard=true in profile "functional-20220125160520-11219"
I0125 16:09:58.139622   14114 addons.go:153] Setting addon dashboard=true in "functional-20220125160520-11219"
I0125 16:09:58.139642   14114 host.go:66] Checking if "functional-20220125160520-11219" exists ...
I0125 16:09:58.139987   14114 cli_runner.go:133] Run: docker container inspect functional-20220125160520-11219 --format={{.State.Status}}
I0125 16:09:58.273767   14114 out.go:176]   - Using image kubernetesui/dashboard:v2.3.1
I0125 16:09:58.298514   14114 out.go:176]   - Using image kubernetesui/metrics-scraper:v1.0.7
I0125 16:09:58.298634   14114 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0125 16:09:58.298651   14114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0125 16:09:58.298805   14114 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-20220125160520-11219
I0125 16:09:58.407371   14114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61868 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/functional-20220125160520-11219/id_rsa Username:docker}
I0125 16:09:58.507823   14114 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0125 16:09:58.507838   14114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0125 16:09:58.520270   14114 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0125 16:09:58.520282   14114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0125 16:09:58.532147   14114 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0125 16:09:58.532158   14114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0125 16:09:58.544965   14114 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0125 16:09:58.544975   14114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4278 bytes)
I0125 16:09:58.557940   14114 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
I0125 16:09:58.557954   14114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0125 16:09:58.571562   14114 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0125 16:09:58.571574   14114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0125 16:09:58.583981   14114 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0125 16:09:58.583992   14114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0125 16:09:58.596958   14114 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0125 16:09:58.596969   14114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0125 16:09:58.609662   14114 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0125 16:09:58.609674   14114 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0125 16:09:58.622255   14114 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0125 16:09:59.032521   14114 addons.go:116] Writing out "functional-20220125160520-11219" config to set dashboard=true...
W0125 16:09:59.033013   14114 out.go:241] * Verifying dashboard health ...
* Verifying dashboard health ...
I0125 16:09:59.034527   14114 kapi.go:59] client config for functional-20220125160520-11219: &rest.Config{Host:"https://127.0.0.1:61872", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21cd640), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0125 16:09:59.054081   14114 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  839e28ff-5dd7-4aee-9736-556ff2f88921 688 0 2022-01-25 16:09:58 -0800 PST <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] []  [{kubectl-client-side-apply Update v1 2022-01-25 16:09:58 -0800 PST FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.107.180.167,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.107.180.167],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0125 16:09:59.054210   14114 out.go:241] * Launching proxy ...
* Launching proxy ...
I0125 16:09:59.054310   14114 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-20220125160520-11219 proxy --port 36195]
I0125 16:09:59.055915   14114 dashboard.go:157] Waiting for kubectl to output host:port ...
I0125 16:09:59.096978   14114 dashboard.go:175] proxy stdout: Starting to serve on  27.0.0.1:36195
W0125 16:09:59.097029   14114 out.go:241] * Verifying proxy health ...
* Verifying proxy health ...
I0125 16:09:59.097068   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.097118   14114 retry.go:31] will retry after 110.466µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.097285   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.097297   14114 retry.go:31] will retry after 216.077µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.097650   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.097662   14114 retry.go:31] will retry after 262.026µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.098107   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.098125   14114 retry.go:31] will retry after 316.478µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.098647   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.098659   14114 retry.go:31] will retry after 468.098µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.099291   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.099307   14114 retry.go:31] will retry after 901.244µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.100275   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.100287   14114 retry.go:31] will retry after 644.295µs: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.100984   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.101008   14114 retry.go:31] will retry after 1.121724ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.102298   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.102312   14114 retry.go:31] will retry after 1.529966ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.104174   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.104189   14114 retry.go:31] will retry after 3.078972ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.107377   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.107448   14114 retry.go:31] will retry after 5.854223ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.113552   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.113576   14114 retry.go:31] will retry after 11.362655ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.129380   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.129420   14114 retry.go:31] will retry after 9.267303ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.138899   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.138933   14114 retry.go:31] will retry after 17.139291ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.158050   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.158082   14114 retry.go:31] will retry after 23.881489ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.182822   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.182852   14114 retry.go:31] will retry after 42.427055ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.229412   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.229456   14114 retry.go:31] will retry after 51.432832ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.288521   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.288580   14114 retry.go:31] will retry after 78.14118ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.372854   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.372884   14114 retry.go:31] will retry after 174.255803ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.547210   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.547245   14114 retry.go:31] will retry after 159.291408ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.707334   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.707370   14114 retry.go:31] will retry after 233.827468ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:09:59.947986   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:09:59.948033   14114 retry.go:31] will retry after 429.392365ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:10:00.380049   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:10:00.380079   14114 retry.go:31] will retry after 801.058534ms: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:10:01.185537   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:10:01.185588   14114 retry.go:31] will retry after 1.529087469s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:10:02.720843   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:10:02.720889   14114 retry.go:31] will retry after 1.335136154s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:10:04.056197   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:10:04.056236   14114 retry.go:31] will retry after 2.012724691s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:10:06.071111   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:10:06.071153   14114 retry.go:31] will retry after 4.744335389s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:10:10.820808   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:10:10.820839   14114 retry.go:31] will retry after 4.014454686s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:10:14.836263   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:10:14.836334   14114 retry.go:31] will retry after 11.635741654s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:10:26.474112   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:10:26.474157   14114 retry.go:31] will retry after 15.298130033s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:10:41.774264   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:10:41.774311   14114 retry.go:31] will retry after 19.631844237s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:11:01.408054   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:11:01.408115   14114 retry.go:31] will retry after 15.195386994s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:11:16.608906   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:11:16.608966   14114 retry.go:31] will retry after 28.402880652s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:11:45.019294   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:11:45.019362   14114 retry.go:31] will retry after 1m6.435206373s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:12:51.462677   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:12:51.462742   14114 retry.go:31] will retry after 1m28.514497132s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:14:19.981614   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:14:19.981673   14114 retry.go:31] will retry after 34.767217402s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
I0125 16:14:54.752040   14114 dashboard.go:212] http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL <nil>
I0125 16:14:54.752105   14114 retry.go:31] will retry after 1m5.688515861s: checkURL: Get "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/": http: no Host in request URL
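
Two things stand out in the probe loop above. First, the retry.go lines show a standard backoff with jitter: delays grow roughly exponentially, from about 110µs up to over a minute, until the surrounding test gives up. Second, every probe fails the same way because the URL being checked has no host at all ("http:///api/..."); note that the proxy stdout captured earlier reads "Starting to serve on  27.0.0.1:36195" rather than "127.0.0.1:36195", so the host:port apparently never parsed cleanly out of kubectl's output. A short Go sketch reproducing just the error mechanics, not minikube's code:

package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// The URL shape from the log: a scheme and a path, but an empty authority.
	raw := "http:///api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"

	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("parsed Host: %q\n", u.Host) // "": parsing succeeds, host is empty

	// net/http refuses to send a request without a host, yielding the exact
	// error repeated throughout the retry loop above.
	if _, err := http.Get(raw); err != nil {
		fmt.Println(err) // Get "http:///...": http: no Host in request URL
	}
}
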
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect functional-20220125160520-11219
helpers_test.go:236: (dbg) docker inspect functional-20220125160520-11219:

-- stdout --
	[
	    {
	        "Id": "1d09011e1335454602abd1a4e331953b9a592ddf22d8c990341e894fbdb829c4",
	        "Created": "2022-01-26T00:05:32.462656153Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 31818,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-01-26T00:05:41.413061203Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/1d09011e1335454602abd1a4e331953b9a592ddf22d8c990341e894fbdb829c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1d09011e1335454602abd1a4e331953b9a592ddf22d8c990341e894fbdb829c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/1d09011e1335454602abd1a4e331953b9a592ddf22d8c990341e894fbdb829c4/hosts",
	        "LogPath": "/var/lib/docker/containers/1d09011e1335454602abd1a4e331953b9a592ddf22d8c990341e894fbdb829c4/1d09011e1335454602abd1a4e331953b9a592ddf22d8c990341e894fbdb829c4-json.log",
	        "Name": "/functional-20220125160520-11219",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-20220125160520-11219:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-20220125160520-11219",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9365a006ec489b1f533024223193c76559c09864ab8fbc53cc0198bfef68d9f9-init/diff:/var/lib/docker/overlay2/f6644e174c834e004f7c6a7fd84cb30249e653cc28149565c547eb0aa32b2b22/diff:/var/lib/docker/overlay2/968f387d844b747986ea23bbefc5cc36d27f1844123d64b03096b498a18227f0/diff:/var/lib/docker/overlay2/3203a36297bf74d3eefe39969197da20131c1144329e83fbc94f578c7183d9f4/diff:/var/lib/docker/overlay2/8e7880efcc82627322c02b71747b68679f667fb86571fe7533cf6a3d4b633356/diff:/var/lib/docker/overlay2/8795341a48190b7df6b12df8a17a4a2f2191bf7590cfa3f6470735930a3b667a/diff:/var/lib/docker/overlay2/02ea8e3eccc1e90d73656ef34f9421af3e293a438eb7e1905e2f87b778316786/diff:/var/lib/docker/overlay2/2aadeb562c286a175a060b66c8bc81d30d2b6de6c626522114af1e85ee2f3aee/diff:/var/lib/docker/overlay2/afe0fbc73729589e58d48d713b0aeb040b2d44035ffdee605a8e7e2babe0a2c3/diff:/var/lib/docker/overlay2/798e8675f5274e1f3c053597604a9d0636b6d456064cc94af9e1a5043428130e/diff:/var/lib/docker/overlay2/36d3ce8c4bd049112c0fcfddc814a9c3bb9209c8edfd0584c32abd8768dfb7c6/diff:/var/lib/docker/overlay2/57011940c99ff9806222058d5c6c6aa07b2828eebc9a73f8a2b8f0157fb66f79/diff:/var/lib/docker/overlay2/452cf0420b92814a98512be6c74ad70cceacc404351244100b022f166a0b6a40/diff:/var/lib/docker/overlay2/49f36a3896f3e0c99ca1bb0a62416d83f28c2cdfcbc8ac9073f5fe2c6fe67a7d/diff:/var/lib/docker/overlay2/6b20e443c8c5775f2cd9cb99cb2b7d0254ff94ae51a6201094569a66fda56064/diff:/var/lib/docker/overlay2/6cc21e5ad7dfd64f6833e2c9278ad90447e59a0d58e54d1609ef3b05a86327a5/diff:/var/lib/docker/overlay2/d8027491b5695727611fc0fd4362991c33b9c494d093578d3293a874b11b70b0/diff:/var/lib/docker/overlay2/71565a6201e5f60cbbb031d8ad83db62520a2f03b985543c1a2df91760b617be/diff:/var/lib/docker/overlay2/8743e4a2f62d0b4d7d131f27b8ab14d6275204ab16f07549044818ea2ef91cea/diff:/var/lib/docker/overlay2/008f05bcc283b9d4492f78757715f81657b44cd5329d836d8d87067d6958f43a/diff:/var/lib/docker/overlay2/1f92040662498e9049ddb24ff6b7472c3fb47d8819226b89556a51b906ecd170/diff:/var/lib/docker/overlay2/c1563ff023059aaacee9eefc7c1eed9ba16b25747bbdb8626371e3b6dd42ce11/diff:/var/lib/docker/overlay2/455e83cf21082e41accb0e919605ea47b4fe05ce653f50faae5c757e3d6cbf64/diff:/var/lib/docker/overlay2/264ed6f36ee2e48e54c0f7f4a13890f96421d3347c31961597e8b7483f8e0f98/diff:/var/lib/docker/overlay2/d715a2db6a3afc96c9e10fc741df34f4e9dde06e14bb7887d8fa12b67af0eaa9/diff:/var/lib/docker/overlay2/4f569f12545ea7ce6eeb31404f4255c44cc9b464ee63fc15474251eb86d98cee/diff:/var/lib/docker/overlay2/9934c2c689d1047f3e4d540b6a6b5d2ba34f2b85cd2f9c6176f7adca6360aab9/diff:/var/lib/docker/overlay2/d76e0cbc1b7baf6404c6077d4c903a5a4fcc4846840cfd1646cc78331cc092b4/diff:/var/lib/docker/overlay2/a6143d78c1dde4f438fd87f67c9f5fea09585e4b086acd45d24981f596a94bc3/diff:/var/lib/docker/overlay2/6cc78044264409c74c0b4c0c9c39daede8846ab1d73c4faa0e79c82b263eac2e/diff:/var/lib/docker/overlay2/ff1878038c477022daf6fea1648edea9cbdc6f6526860d9362f23497105dfeda/diff:/var/lib/docker/overlay2/c72badf903e72856a68f4f626defe17f6638fc76bd774750482eae7f46d015ea/diff:/var/lib/docker/overlay2/e03e93d8d5c22e18f6ed5fa6e50efc1d6f3e1ec6f937d67a2b24c16d69b76a09/diff:/var/lib/docker/overlay2/7db41ccc63adc1cfb33539022e3744f651e9e40e12fc4042463bcd2290d1b1b1/diff:/var/lib/docker/overlay2/a13fef140530da8a24cdccb88d72678f269a44bb0cc5f296c6317b2415fe7801/diff:/var/lib/docker/overlay2/47723e734a521920ecfa6df4f32a286eb96c1d53dff52ba0af2d473f850ee8a0/diff:/var/lib/docker/overlay2/7117426bf3fd144bc4534ccc4b0358b9adb4e11733845990620cbbda661210d7/diff:/var/lib/docker/overlay2/4db9b623984c6e2a4a0a3d7f2d9b70e667e0eeff1f10359942a38dc2a82a2953/diff:/var/lib/docker/overlay2/2e75311da297566b0cc76cccfc738f5b2ccc35db30f4998c5dc1faaa18e94ed5/diff:/var/lib/docker/overlay2/9d203568b478a0d518d01c50ec8532e72438ddcb0984e45d0c10306ef71f2314/diff:/var/lib/docker/overlay2/8eb183aad13026c88b993b0004dc591ea5542085a0c0ea9eaa82a98aa31f3e8b/diff:/var/lib/docker/overlay2/6bbf259eb4862871f7a2e30bc085d2b7004c27644fa19af4a3679280676240d3/diff:/var/lib/docker/overlay2/f2b18eed216c511e28534f638460c078f3b6279ed23222eb15a2f214202e9ae3/diff:/var/lib/docker/overlay2/8af1a74786eb08908e09b0a5b6c8d4f744680cb1712d3e2b2dec82b790e80bdf/diff:/var/lib/docker/overlay2/64c9df0e0bc93db79a5d7647a1eb9ff931c29be23788bb62fdcb669191cd6423/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9365a006ec489b1f533024223193c76559c09864ab8fbc53cc0198bfef68d9f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9365a006ec489b1f533024223193c76559c09864ab8fbc53cc0198bfef68d9f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9365a006ec489b1f533024223193c76559c09864ab8fbc53cc0198bfef68d9f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-20220125160520-11219",
	                "Source": "/var/lib/docker/volumes/functional-20220125160520-11219/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-20220125160520-11219",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-20220125160520-11219",
	                "name.minikube.sigs.k8s.io": "functional-20220125160520-11219",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d0cd4db3d67e45ffbb70fc0340f2a9cd8e1200039d095839041f29231c22adf3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61868"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61869"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61870"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61871"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61872"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d0cd4db3d67e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-20220125160520-11219": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1d09011e1335",
	                        "functional-20220125160520-11219"
	                    ],
	                    "NetworkID": "612a847e7349791ef1f71fae56c6ddcb1b207643eb2ce2143d71bde4b7597e3a",
	                    "EndpointID": "3b0ad559de286eb5bf2f811af74e4d77d05225706b59bf473d7d6173f6e075dd",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
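
For reference, the port map in the inspect output above is what the harness's repeated docker container inspect -f calls (cli_runner.go) read: a Go template indexes .NetworkSettings.Ports to pull the host port bound to the container's 22/tcp (61868 here, which feeds the ssh client seen at sshutil.go:53). A sketch of that lookup via the docker CLI, assuming a local docker daemon and this run's container name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "functional-20220125160520-11219" // container name from this run
	// Same template the harness uses, minus the literal quotes it adds.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("host port for 22/tcp:", strings.TrimSpace(string(out))) // e.g. 61868
}
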
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p functional-20220125160520-11219 -n functional-20220125160520-11219
helpers_test.go:245: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 logs -n 25: (2.426847267s)
helpers_test.go:253: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                        Args                        |             Profile             |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	| profile | list -o json --light                               | minikube                        | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:09:22 PST | Tue, 25 Jan 2022 16:09:22 PST |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:09:23 PST | Tue, 25 Jan 2022 16:09:23 PST |
	|         | ssh echo hello                                     |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:09:23 PST | Tue, 25 Jan 2022 16:09:24 PST |
	|         | ssh cat /etc/hostname                              |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:09:39 PST | Tue, 25 Jan 2022 16:09:39 PST |
	|         | addons list                                        |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:09:39 PST | Tue, 25 Jan 2022 16:09:39 PST |
	|         | addons list -o json                                |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:09:50 PST | Tue, 25 Jan 2022 16:09:50 PST |
	|         | service list                                       |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:09:52 PST | Tue, 25 Jan 2022 16:09:53 PST |
	|         | ssh findmnt -T /mount-9p | grep                    |                                 |         |         |                               |                               |
	|         | 9p                                                 |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:09:53 PST | Tue, 25 Jan 2022 16:09:53 PST |
	|         | ssh -- ls -la /mount-9p                            |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:09:53 PST | Tue, 25 Jan 2022 16:09:54 PST |
	|         | ssh cat                                            |                                 |         |         |                               |                               |
	|         | /mount-9p/test-1643155791501551000                 |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:09:58 PST | Tue, 25 Jan 2022 16:09:59 PST |
	|         | ssh stat                                           |                                 |         |         |                               |                               |
	|         | /mount-9p/created-by-test                          |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:09:59 PST | Tue, 25 Jan 2022 16:10:00 PST |
	|         | ssh stat                                           |                                 |         |         |                               |                               |
	|         | /mount-9p/created-by-pod                           |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:00 PST | Tue, 25 Jan 2022 16:10:00 PST |
	|         | ssh sudo umount -f /mount-9p                       |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:02 PST | Tue, 25 Jan 2022 16:10:02 PST |
	|         | ssh findmnt -T /mount-9p | grep                    |                                 |         |         |                               |                               |
	|         | 9p                                                 |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:03 PST | Tue, 25 Jan 2022 16:10:03 PST |
	|         | ssh -- ls -la /mount-9p                            |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:04 PST | Tue, 25 Jan 2022 16:10:04 PST |
	|         | version --short                                    |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:04 PST | Tue, 25 Jan 2022 16:10:05 PST |
	|         | version -o=json --components                       |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:05 PST | Tue, 25 Jan 2022 16:10:05 PST |
	|         | update-context                                     |                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=2                             |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:06 PST | Tue, 25 Jan 2022 16:10:06 PST |
	|         | update-context                                     |                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=2                             |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:06 PST | Tue, 25 Jan 2022 16:10:07 PST |
	|         | update-context                                     |                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=2                             |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:07 PST | Tue, 25 Jan 2022 16:10:07 PST |
	|         | image ls --format short                            |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:07 PST | Tue, 25 Jan 2022 16:10:08 PST |
	|         | image ls --format yaml                             |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219 image build -t     | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:08 PST | Tue, 25 Jan 2022 16:10:11 PST |
	|         | localhost/my-image:functional-20220125160520-11219 |                                 |         |         |                               |                               |
	|         | testdata/build                                     |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:11 PST | Tue, 25 Jan 2022 16:10:11 PST |
	|         | image ls                                           |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:11 PST | Tue, 25 Jan 2022 16:10:11 PST |
	|         | image ls --format json                             |                                 |         |         |                               |                               |
	| -p      | functional-20220125160520-11219                    | functional-20220125160520-11219 | jenkins | v1.25.1 | Tue, 25 Jan 2022 16:10:12 PST | Tue, 25 Jan 2022 16:10:12 PST |
	|         | image ls --format table                            |                                 |         |         |                               |                               |
	|---------|----------------------------------------------------|---------------------------------|---------|---------|-------------------------------|-------------------------------|
	
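	The mount-related rows above (findmnt, ls -la, stat, and umount against /mount-9p) are the functional test's way of verifying the 9p mount through minikube's CLI. As a rough illustration, the same check can be driven from Go with os/exec; the binary path and profile name are copied from the table, while checkMount itself is a hypothetical helper, not minikube test code:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// checkMount runs `minikube ssh` against a profile and reports whether
	// the 9p mount is visible, mirroring the "ssh findmnt -T /mount-9p |
	// grep 9p" rows in the audit table above.
	func checkMount(binary, profile string) (bool, error) {
		out, err := exec.Command(binary, "-p", profile, "ssh",
			"findmnt -T /mount-9p").CombinedOutput()
		if err != nil {
			return false, fmt.Errorf("findmnt failed: %v: %s", err, out)
		}
		return strings.Contains(string(out), "9p"), nil
	}
	
	func main() {
		mounted, err := checkMount("out/minikube-darwin-amd64",
			"functional-20220125160520-11219")
		fmt.Println(mounted, err)
	}
	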
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/25 16:09:56
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0125 16:09:56.861268   14092 out.go:297] Setting OutFile to fd 1 ...
	I0125 16:09:56.861430   14092 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:09:56.861435   14092 out.go:310] Setting ErrFile to fd 2...
	I0125 16:09:56.861438   14092 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:09:56.861522   14092 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/bin
	I0125 16:09:56.861782   14092 out.go:304] Setting JSON to false
	I0125 16:09:56.888286   14092 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5971,"bootTime":1643149825,"procs":312,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0125 16:09:56.888408   14092 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0125 16:09:56.915697   14092 out.go:176] * [functional-20220125160520-11219] minikube v1.25.1 on Darwin 11.1
	I0125 16:09:56.942124   14092 out.go:176]   - MINIKUBE_LOCATION=13326
	I0125 16:09:56.968363   14092 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	I0125 16:09:56.994251   14092 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0125 16:09:57.036251   14092 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0125 16:09:57.080330   14092 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	I0125 16:09:57.080784   14092 config.go:176] Loaded profile config "functional-20220125160520-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 16:09:57.081123   14092 driver.go:344] Setting default libvirt URI to qemu:///system
	I0125 16:09:57.180020   14092 docker.go:132] docker version: linux-20.10.5
	I0125 16:09:57.180157   14092 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 16:09:57.345950   14092 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-26 00:09:57.302143255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 16:09:57.376939   14092 out.go:176] * Using the docker driver based on existing profile
	I0125 16:09:57.376954   14092 start.go:280] selected driver: docker
	I0125 16:09:57.376959   14092 start.go:795] validating driver "docker" against &{Name:functional-20220125160520-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220125160520-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0125 16:09:57.377050   14092 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0125 16:09:57.377251   14092 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 16:09:57.536186   14092 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-26 00:09:57.489574351 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 16:09:57.538082   14092 cni.go:93] Creating CNI manager for ""
	I0125 16:09:57.538098   14092 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0125 16:09:57.538122   14092 start_flags.go:302] config:
	{Name:functional-20220125160520-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220125160520-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	
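	The "Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" header at the top of the Last Start log describes the klog prefix carried by every I/W/E/F line above. A small sketch of splitting such lines mechanically, assuming they follow that documented format:
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// klogHeader captures the fields of the documented prefix:
	// severity, mmdd date, timestamp, thread id, file:line, and message.
	var klogHeader = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)
	
	func main() {
		line := "I0125 16:09:56.861268   14092 out.go:297] Setting OutFile to fd 1 ..."
		if m := klogHeader.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
	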
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-01-26 00:05:43 UTC, end at Wed 2022-01-26 00:14:59 UTC. --
	Jan 26 00:07:15 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:07:15.414515857Z" level=info msg="ignoring event" container=e95a62bc5a104c726f2c4ef5ab5f15b56adcc39d4244da2ea88ce884e054f128 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:07:53 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:07:53.375890670Z" level=info msg="ignoring event" container=007ac3070df6e973967abd44c25f4f86f6a85e6e062c91be79cdb27017a44b0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:07:53 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:07:53.529616041Z" level=info msg="ignoring event" container=d8f5cb06e0bbef6b1059dc924d298b3433bfe835d483664a8642cb13ce3b4157 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:07:58 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:07:58.632310444Z" level=info msg="ignoring event" container=b3e9cf61f09095fd62c8e501d98d3793320548465c5523fd51d1aff59975ffde module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:07:58 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:07:58.723352137Z" level=info msg="ignoring event" container=2942d0c32816833afdf63523353ac210423fd78ca38fe2390457fc1b6b1d4884 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:07:58 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:07:58.867581291Z" level=info msg="ignoring event" container=aae0f2ba02965e5f2b901465cdab215553410f8c4c1022dd73c4f6eed0968fe4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:07:58 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:07:58.959757956Z" level=info msg="ignoring event" container=8dedf9e68e3efc1c36d6ab0b18c55ed63bae70ebb9bd9b0060e218ab355793b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:07:59 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:07:59.072479245Z" level=info msg="ignoring event" container=14110b48dd74edb3b6cde20963e8e14ff4e904a683117c56a8ce40a0e5552002 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:08:09 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:08:09.156838598Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=55ab3174878b20101188b43dae1655e09293e4b78d97fc5c9e79d54a32fb0398
	Jan 26 00:08:09 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:08:09.188310275Z" level=info msg="ignoring event" container=55ab3174878b20101188b43dae1655e09293e4b78d97fc5c9e79d54a32fb0398 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:08:09 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:08:09.290889878Z" level=info msg="ignoring event" container=c3a5d57a2d4b96dd926c5deaced7666edfedfe4744b2eb62f16169766e562844 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:08:19 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:08:19.390676098Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=0f478f0877724d2dea0fcc46efe73c62de18262a3a9d750eef1be2cab469a51b
	Jan 26 00:08:19 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:08:19.472347306Z" level=info msg="ignoring event" container=0f478f0877724d2dea0fcc46efe73c62de18262a3a9d750eef1be2cab469a51b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:08:19 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:08:19.563649861Z" level=info msg="ignoring event" container=d1e5b3cf17ef9c5e8a552094ab4895d915d8685adce9ee26cb9520aa682c0142 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:08:19 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:08:19.655410777Z" level=info msg="ignoring event" container=e622edadb254cf3b71814b562f93ba4b699cee48de43470662752f5fb380bfdc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:08:19 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:08:19.747878290Z" level=info msg="ignoring event" container=6694c445c5448798b7c877085b5e6e463c0c8ad9ea2efd62081c1829f02ccc3c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:08:19 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:08:19.838147034Z" level=info msg="ignoring event" container=6a095a4a7145305bb9d0fb368467d9fe3b67ff7613aa79cadc4369bfff0bb77d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:08:50 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:08:50.318902911Z" level=info msg="ignoring event" container=eca19ec4e770f528929854e7633944a7fef48b6714fe5a4a48b7fadc3aa0189b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:08:50 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:08:50.360272195Z" level=info msg="ignoring event" container=5da53f3f60cad34bc24d88bfdb767beac0a4e3c3a6d622b52c1573734cc40e4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:09:45 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:09:45.874215340Z" level=info msg="ignoring event" container=79580a4d183f519311707fc1d09fd0080fc49799166a0a314bdc3c87329f04b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:09:45 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:09:45.914231283Z" level=info msg="ignoring event" container=a264db13df0c9e1427e3dbf16a7203867a68dc203694a1efa82a7989d74168e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:09:57 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:09:57.263846551Z" level=info msg="ignoring event" container=332007e467f30240a220650379601de4e75c1304b0e86ec3cdb7b1934480050b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:09:57 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:09:57.950465081Z" level=info msg="ignoring event" container=2f0c17bf04a2e13279bc02c867efd917355a742dbde4b345a2b77a9cdf58ab1c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:10:10 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:10:10.892275230Z" level=info msg="ignoring event" container=973fca18de812a2dddacb509eb0424e0cfeee859bc3bfca7c1c1e5cdc6562d6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 00:10:11 functional-20220125160520-11219 dockerd[468]: time="2022-01-26T00:10:11.006672867Z" level=info msg="Layer sha256:8d988d9cbd4c3812fb85f3c741a359985602af139e727005f4d4471ac42f9d1a cleaned up"
	
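	The two "Container failed to exit within 10s of signal 15 - using the force" entries above show dockerd's stop sequence: SIGTERM (signal 15) first, then SIGKILL once the 10-second grace period lapses. A rough sketch of the same term-then-kill pattern in Go, illustrative only and not dockerd's implementation:
	
	package main
	
	import (
		"os/exec"
		"syscall"
		"time"
	)
	
	// stopWithGrace sends SIGTERM and escalates to SIGKILL if the process
	// has not exited within the grace period, mirroring the behaviour
	// logged above.
	func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
		cmd.Process.Signal(syscall.SIGTERM)
		done := make(chan error, 1)
		go func() { done <- cmd.Wait() }()
		select {
		case err := <-done:
			return err
		case <-time.After(grace):
			cmd.Process.Kill() // "using the force"
			return <-done
		}
	}
	
	func main() {
		cmd := exec.Command("sleep", "60")
		if err := cmd.Start(); err != nil {
			return
		}
		stopWithGrace(cmd, 10*time.Second)
	}
	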
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                        ATTEMPT             POD ID
	d2580dc21adf1       e1482a24335a6                                                                                         5 minutes ago       Running             kubernetes-dashboard        0                   484760670cd4d
	71d3b781f6bb6       7801cfc6d5c07                                                                                         5 minutes ago       Running             dashboard-metrics-scraper   0                   9a1e855f5fe78
	332007e467f30       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger                0                   2f0c17bf04a2e
	4036d7866d009       nginx@sha256:819e4be00b86634ce26b20f73e260e1ccf097e636b98e9728fac89fb15a52ca3                         5 minutes ago       Running             myfrontend                  0                   d591dfc7d87c1
	2930b8b6c8ccf       k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969         5 minutes ago       Running             echoserver                  0                   4b175ea3c3dc3
	11d4d58ffb114       nginx@sha256:da9c94bec1da829ebd52431a84502ec471c8e548ffb2cedbf36260fd9bd1d4d3                         5 minutes ago       Running             nginx                       0                   4a14c19cc8047
	6e222cb381b41       mysql@sha256:66480693e01295d85954bb5dbe2f41f29ebceb57d3d8098ea0c9d201473f2d8b                         5 minutes ago       Running             mysql                       0                   d1d83b0efebae
	18a87e13fcf2a       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner         0                   f00b4a24550ae
	45aa1c2cc9b04       a4ca41631cc7a                                                                                         6 minutes ago       Running             coredns                     0                   46819cfeda093
	2b04d0b4933c5       d922ca3da64b3                                                                                         6 minutes ago       Running             kube-proxy                  0                   44aab47bd988e
	544ce0172a889       6114d758d6d16                                                                                         6 minutes ago       Running             kube-scheduler              1                   a67e7d0b01a49
	a9e46576f8c18       8a0228dd6a683                                                                                         6 minutes ago       Running             kube-apiserver              0                   407c59f55a224
	0c40dfc6c253b       4783639ba7e03                                                                                         6 minutes ago       Running             kube-controller-manager     1                   50416bdd5a176
	80dd47fbc5ea7       25f8c7f3da61c                                                                                         6 minutes ago       Running             etcd                        1                   c90e9fcde66a9
	
	* 
	* ==> coredns [45aa1c2cc9b0] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = c23ed519c17e71ee396ed052e6209e94
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               functional-20220125160520-11219
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-20220125160520-11219
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f2b90e74c34b616e7f63aca230995ce4db99c965
	                    minikube.k8s.io/name=functional-20220125160520-11219
	                    minikube.k8s.io/updated_at=2022_01_25T16_08_31_0700
	                    minikube.k8s.io/version=v1.25.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Jan 2022 00:08:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-20220125160520-11219
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Jan 2022 00:14:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Jan 2022 00:10:34 +0000   Wed, 26 Jan 2022 00:08:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Jan 2022 00:10:34 +0000   Wed, 26 Jan 2022 00:08:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Jan 2022 00:10:34 +0000   Wed, 26 Jan 2022 00:08:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 26 Jan 2022 00:10:34 +0000   Wed, 26 Jan 2022 00:08:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-20220125160520-11219
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                40af86f8-cf04-4480-9fae-bd004b2289be
	  Boot ID:                    64eaa28b-2bea-4721-8bf9-d8b79f6942f4
	  Kernel Version:             5.10.25-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.12
	  Kubelet Version:            v1.23.2
	  Kube-Proxy Version:         v1.23.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-54fbb85-xfcs6                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  default                     mysql-b87c45988-hdwpb                                      600m (10%)    700m (11%)  512Mi (8%)       700Mi (11%)    5m55s
	  default                     nginx-svc                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  default                     sp-pod                                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 coredns-64897985d-h98hc                                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     6m15s
	  kube-system                 etcd-functional-20220125160520-11219                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         6m24s
	  kube-system                 kube-apiserver-functional-20220125160520-11219             250m (4%)     0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-controller-manager-functional-20220125160520-11219    200m (3%)     0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-proxy-rjxgd                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-functional-20220125160520-11219             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 storage-provisioner                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-58549894f-v65bb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kubernetes-dashboard        kubernetes-dashboard-ccd587f44-czcnj                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (22%)  700m (11%)
	  memory             682Mi (11%)  870Mi (14%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                                            Age    From        Message
	  ----     ------                                            ----   ----        -------
	  Normal   Starting                                          6m13s  kube-proxy  
	  Warning  listen tcp4 :31447: bind: address already in use  5m29s  kube-proxy  can't open port "nodePort for default/nginx-svc" (:31447/tcp4), skipping it
	  Warning  listen tcp4 :30010: bind: address already in use  5m14s  kube-proxy  can't open port "nodePort for default/hello-node" (:30010/tcp4), skipping it
	  Normal   NodeHasSufficientPID                              6m28s  kubelet     Node functional-20220125160520-11219 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced                           6m28s  kubelet     Updated Node Allocatable limit across pods
	  Normal   Starting                                          6m28s  kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory                           6m28s  kubelet     Node functional-20220125160520-11219 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure                             6m28s  kubelet     Node functional-20220125160520-11219 status is now: NodeHasNoDiskPressure
	  Normal   NodeReady                                         6m18s  kubelet     Node functional-20220125160520-11219 status is now: NodeReady
	
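	The percentages in the two resource tables above are requests and limits relative to the node's allocatable capacity (6 CPUs, 6088600Ki memory), with integer truncation: the 1350m total CPU request is 1350/6000 = 22.5%, displayed as 22%. A few lines of Go reproducing the rounding, with the inputs copied from the output above:
	
	package main
	
	import "fmt"
	
	func main() {
		// Allocatable capacity from the node description: 6 CPUs = 6000m.
		const allocatable = 6000 // millicores
		// Sums of the per-pod CPU requests and limits listed above.
		requests, limits := 1350, 700
		// Integer division truncates, matching the displayed 22% and 11%.
		fmt.Printf("cpu requests: %dm (%d%%)\n", requests, requests*100/allocatable)
		fmt.Printf("cpu limits:   %dm (%d%%)\n", limits, limits*100/allocatable)
	}
	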
	* 
	* ==> dmesg <==
	* [  +0.029588] bpfilter: write fail -32
	[  +0.034373] bpfilter: write fail -32
	[ +18.289622] bpfilter: read fail 0
	[  +0.032679] bpfilter: read fail 0
	[  +0.025301] bpfilter: write fail -32
	[ +11.587292] bpfilter: write fail -32
	[  +0.040784] bpfilter: write fail -32
	[Jan26 00:13] bpfilter: read fail 0
	[  +0.024883] bpfilter: write fail -32
	[  +0.026235] bpfilter: read fail 0
	[  +0.031133] bpfilter: read fail 0
	[ +18.273406] bpfilter: write fail -32
	[  +0.046133] bpfilter: write fail -32
	[ +11.601157] bpfilter: read fail 0
	[  +0.028862] bpfilter: read fail 0
	[  +0.035023] bpfilter: write fail -32
	[Jan26 00:14] bpfilter: read fail 0
	[  +0.034610] bpfilter: write fail -32
	[  +0.029938] bpfilter: write fail -32
	[ +18.289298] bpfilter: write fail -32
	[  +0.026350] bpfilter: read fail 0
	[  +0.028751] bpfilter: read fail 0
	[ +11.592022] bpfilter: read fail 0
	[  +0.021844] bpfilter: write fail -32
	[  +0.028772] bpfilter: write fail -32
	
	* 
	* ==> etcd [80dd47fbc5ea] <==
	* {"level":"info","ts":"2022-01-26T00:08:26.004Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-01-26T00:08:26.004Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-01-26T00:08:26.004Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-01-26T00:08:26.004Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-01-26T00:08:26.004Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-01-26T00:08:26.537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-01-26T00:08:26.537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-01-26T00:08:26.537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-01-26T00:08:26.537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-01-26T00:08:26.537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-01-26T00:08:26.537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-01-26T00:08:26.537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-01-26T00:08:26.537Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-20220125160520-11219 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-01-26T00:08:26.537Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-01-26T00:08:26.537Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-01-26T00:08:26.538Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-01-26T00:08:26.538Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-01-26T00:08:26.538Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-01-26T00:08:26.538Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-01-26T00:08:26.538Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-01-26T00:08:26.538Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-01-26T00:08:26.538Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-01-26T00:08:26.538Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"warn","ts":"2022-01-26T00:09:13.043Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"153.385424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-01-26T00:09:13.043Z","caller":"traceutil/trace.go:171","msg":"trace[74622875] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:519; }","duration":"153.499689ms","start":"2022-01-26T00:09:12.889Z","end":"2022-01-26T00:09:13.043Z","steps":["trace[74622875] 'agreement among raft nodes before linearized reading'  (duration: 46.649094ms)","trace[74622875] 'range keys from in-memory index tree'  (duration: 106.720022ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  00:15:00 up 16 min,  0 users,  load average: 0.11, 0.77, 0.91
	Linux functional-20220125160520-11219 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [a9e46576f8c1] <==
	* I0126 00:08:28.406144       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0126 00:08:29.287115       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0126 00:08:29.287188       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0126 00:08:29.291662       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0126 00:08:29.293926       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0126 00:08:29.293936       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0126 00:08:29.619797       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0126 00:08:29.645751       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0126 00:08:29.735100       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0126 00:08:29.738804       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0126 00:08:29.739384       1 controller.go:611] quota admission added evaluator for: endpoints
	I0126 00:08:29.742328       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0126 00:08:30.416354       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0126 00:08:31.075104       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0126 00:08:31.081374       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0126 00:08:31.106582       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0126 00:08:31.329470       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0126 00:08:44.002724       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0126 00:08:44.050395       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0126 00:08:46.148917       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0126 00:09:04.901356       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs=map[IPv4:10.101.122.40]
	I0126 00:09:24.552886       1 alloc.go:329] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.101.40.117]
	I0126 00:09:40.017121       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.101.121.31]
	I0126 00:09:58.887575       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.107.180.167]
	I0126 00:09:58.966691       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.109.194.212]
	
	* 
	* ==> kube-controller-manager [0c40dfc6c253] <==
	* I0126 00:09:04.918610       1 event.go:294] "Event occurred" object="default/mysql" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set mysql-b87c45988 to 1"
	I0126 00:09:04.930469       1 event.go:294] "Event occurred" object="default/mysql-b87c45988" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mysql-b87c45988-hdwpb"
	I0126 00:09:33.431677       1 event.go:294] "Event occurred" object="default/myclaim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0126 00:09:39.948967       1 event.go:294] "Event occurred" object="default/hello-node" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-54fbb85 to 1"
	I0126 00:09:39.955635       1 event.go:294] "Event occurred" object="default/hello-node-54fbb85" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-54fbb85-xfcs6"
	I0126 00:09:58.775565       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-58549894f to 1"
	I0126 00:09:58.781879       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0126 00:09:58.785138       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-ccd587f44 to 1"
	E0126 00:09:58.786667       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0126 00:09:58.790978       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0126 00:09:58.792242       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0126 00:09:58.792700       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0126 00:09:58.796324       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0126 00:09:58.797718       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0126 00:09:58.797796       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0126 00:09:58.836057       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0126 00:09:58.836172       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0126 00:09:58.843375       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-58549894f" failed with pods "dashboard-metrics-scraper-58549894f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0126 00:09:58.843467       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0126 00:09:58.843866       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-58549894f-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0126 00:09:58.843908       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0126 00:09:58.850812       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-ccd587f44" failed with pods "kubernetes-dashboard-ccd587f44-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0126 00:09:58.850872       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-ccd587f44-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0126 00:09:58.887782       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-58549894f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-58549894f-v65bb"
	I0126 00:09:58.948589       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-ccd587f44" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-ccd587f44-czcnj"
	
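	The FailedCreate/SuccessfulCreate sequence above is a startup race rather than a persistent failure: the dashboard ReplicaSets are created before their serviceaccount exists, and the ReplicaSet controller requeues and retries until pod creation succeeds at 00:09:58.887. A generic requeue-until-ready sketch of that pattern, not the controller's actual code:
	
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// syncUntilReady retries a sync function until it stops returning an
	// error, the way the ReplicaSet controller requeues the dashboard
	// ReplicaSets until their serviceaccount exists.
	func syncUntilReady(sync func() error, interval time.Duration, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = sync(); err == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return err
	}
	
	func main() {
		tries := 0
		err := syncUntilReady(func() error {
			tries++
			if tries < 5 {
				return errors.New(`serviceaccount "kubernetes-dashboard" not found`)
			}
			return nil
		}, 10*time.Millisecond, 10)
		fmt.Println(tries, err)
	}
	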
	* 
	* ==> kube-proxy [2b04d0b4933c] <==
	* I0126 00:08:44.783301       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0126 00:08:44.783394       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0126 00:08:44.783419       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0126 00:08:46.145889       1 server_others.go:206] "Using iptables Proxier"
	I0126 00:08:46.146010       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0126 00:08:46.146035       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0126 00:08:46.146062       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0126 00:08:46.146527       1 server.go:656] "Version info" version="v1.23.2"
	I0126 00:08:46.147252       1 config.go:226] "Starting endpoint slice config controller"
	I0126 00:08:46.147287       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0126 00:08:46.147406       1 config.go:317] "Starting service config controller"
	I0126 00:08:46.147437       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0126 00:08:46.248002       1 shared_informer.go:247] Caches are synced for service config 
	I0126 00:08:46.248094       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	E0126 00:09:30.793885       1 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :31447: bind: address already in use" port={Description:nodePort for default/nginx-svc IP: IPFamily:4 Port:31447 Protocol:TCP}
	E0126 00:09:44.977888       1 proxier.go:1600] "can't open port, skipping it" err="listen tcp4 :30010: bind: address already in use" port={Description:nodePort for default/hello-node IP: IPFamily:4 Port:30010 Protocol:TCP}
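Note: these two bind errors mean kube-proxy could not open its placeholder socket because something on the node already held NodePorts 31447 and 30010 (possibly a previous kube-proxy instance from the functional test's restart). Traffic still flows through the iptables rules; only the port reservation is skipped. A sketch for finding the holder, assuming the ss utility is present in the minikube node image:

	out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "sudo ss -tlnp | grep -e 31447 -e 30010"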
	
	* 
	* ==> kube-scheduler [544ce0172a88] <==
	* W0126 00:08:28.323739       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0126 00:08:28.323789       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0126 00:08:28.323840       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0126 00:08:28.323921       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0126 00:08:28.323750       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0126 00:08:28.323966       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0126 00:08:29.188953       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0126 00:08:29.188974       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0126 00:08:29.224370       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0126 00:08:29.224495       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0126 00:08:29.289076       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0126 00:08:29.289157       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0126 00:08:29.326051       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0126 00:08:29.326089       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0126 00:08:29.393991       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0126 00:08:29.394055       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0126 00:08:29.415133       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0126 00:08:29.415166       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0126 00:08:29.425002       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0126 00:08:29.425048       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0126 00:08:29.505275       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0126 00:08:29.505311       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0126 00:08:29.537076       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0126 00:08:29.537110       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0126 00:08:31.221596       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
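Note: the wall of "forbidden" list/watch warnings is normal scheduler startup noise. The scheduler comes up before the apiserver has finished reconciling the default RBAC roles for system:kube-scheduler, its reflectors back off and retry, and the final "Caches are synced" line marks recovery. Whether a subject holds a given permission can be probed directly, e.g.:

	kubectl --context functional-20220125160520-11219 auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler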
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-01-26 00:05:43 UTC, end at Wed 2022-01-26 00:15:00 UTC. --
	Jan 26 00:09:54 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:54.775956    6235 topology_manager.go:200] "Topology Admit Handler"
	Jan 26 00:09:54 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:54.922549    6235 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c848720a-b1cd-4bed-b001-86e68662a8a5-test-volume\") pod \"busybox-mount\" (UID: \"c848720a-b1cd-4bed-b001-86e68662a8a5\") " pod="default/busybox-mount"
	Jan 26 00:09:54 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:54.922608    6235 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgzbc\" (UniqueName: \"kubernetes.io/projected/c848720a-b1cd-4bed-b001-86e68662a8a5-kube-api-access-zgzbc\") pod \"busybox-mount\" (UID: \"c848720a-b1cd-4bed-b001-86e68662a8a5\") " pod="default/busybox-mount"
	Jan 26 00:09:55 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:55.337602    6235 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/busybox-mount through plugin: invalid network status for"
	Jan 26 00:09:55 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:55.893403    6235 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/busybox-mount through plugin: invalid network status for"
	Jan 26 00:09:57 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:57.909366    6235 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/busybox-mount through plugin: invalid network status for"
	Jan 26 00:09:57 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:57.911468    6235 scope.go:110] "RemoveContainer" containerID="332007e467f30240a220650379601de4e75c1304b0e86ec3cdb7b1934480050b"
	Jan 26 00:09:58 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:58.938242    6235 topology_manager.go:200] "Topology Admit Handler"
	Jan 26 00:09:58 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:58.952220    6235 topology_manager.go:200] "Topology Admit Handler"
	Jan 26 00:09:58 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:58.954120    6235 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2f0c17bf04a2e13279bc02c867efd917355a742dbde4b345a2b77a9cdf58ab1c"
	Jan 26 00:09:59 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:59.056404    6235 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b0a94bff-7d34-4abc-a598-c5c57db92225-tmp-volume\") pod \"dashboard-metrics-scraper-58549894f-v65bb\" (UID: \"b0a94bff-7d34-4abc-a598-c5c57db92225\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-58549894f-v65bb"
	Jan 26 00:09:59 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:59.056553    6235 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mmxd\" (UniqueName: \"kubernetes.io/projected/b0a94bff-7d34-4abc-a598-c5c57db92225-kube-api-access-9mmxd\") pod \"dashboard-metrics-scraper-58549894f-v65bb\" (UID: \"b0a94bff-7d34-4abc-a598-c5c57db92225\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-58549894f-v65bb"
	Jan 26 00:09:59 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:59.056663    6235 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6nvt\" (UniqueName: \"kubernetes.io/projected/b96f9ec0-c7bc-4bc7-ae72-29c51bdb215a-kube-api-access-b6nvt\") pod \"kubernetes-dashboard-ccd587f44-czcnj\" (UID: \"b96f9ec0-c7bc-4bc7-ae72-29c51bdb215a\") " pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-czcnj"
	Jan 26 00:09:59 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:59.056939    6235 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b96f9ec0-c7bc-4bc7-ae72-29c51bdb215a-tmp-volume\") pod \"kubernetes-dashboard-ccd587f44-czcnj\" (UID: \"b96f9ec0-c7bc-4bc7-ae72-29c51bdb215a\") " pod="kubernetes-dashboard/kubernetes-dashboard-ccd587f44-czcnj"
	Jan 26 00:09:59 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:59.541808    6235 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-58549894f-v65bb through plugin: invalid network status for"
	Jan 26 00:09:59 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:59.656387    6235 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-ccd587f44-czcnj through plugin: invalid network status for"
	Jan 26 00:09:59 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:59.963774    6235 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-58549894f-v65bb through plugin: invalid network status for"
	Jan 26 00:09:59 functional-20220125160520-11219 kubelet[6235]: I0126 00:09:59.968677    6235 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-ccd587f44-czcnj through plugin: invalid network status for"
	Jan 26 00:10:00 functional-20220125160520-11219 kubelet[6235]: I0126 00:10:00.163662    6235 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgzbc\" (UniqueName: \"kubernetes.io/projected/c848720a-b1cd-4bed-b001-86e68662a8a5-kube-api-access-zgzbc\") pod \"c848720a-b1cd-4bed-b001-86e68662a8a5\" (UID: \"c848720a-b1cd-4bed-b001-86e68662a8a5\") "
	Jan 26 00:10:00 functional-20220125160520-11219 kubelet[6235]: I0126 00:10:00.163881    6235 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c848720a-b1cd-4bed-b001-86e68662a8a5-test-volume\") pod \"c848720a-b1cd-4bed-b001-86e68662a8a5\" (UID: \"c848720a-b1cd-4bed-b001-86e68662a8a5\") "
	Jan 26 00:10:00 functional-20220125160520-11219 kubelet[6235]: I0126 00:10:00.163943    6235 operation_generator.go:909] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c848720a-b1cd-4bed-b001-86e68662a8a5-test-volume" (OuterVolumeSpecName: "test-volume") pod "c848720a-b1cd-4bed-b001-86e68662a8a5" (UID: "c848720a-b1cd-4bed-b001-86e68662a8a5"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 26 00:10:00 functional-20220125160520-11219 kubelet[6235]: I0126 00:10:00.165831    6235 operation_generator.go:909] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c848720a-b1cd-4bed-b001-86e68662a8a5-kube-api-access-zgzbc" (OuterVolumeSpecName: "kube-api-access-zgzbc") pod "c848720a-b1cd-4bed-b001-86e68662a8a5" (UID: "c848720a-b1cd-4bed-b001-86e68662a8a5"). InnerVolumeSpecName "kube-api-access-zgzbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 26 00:10:00 functional-20220125160520-11219 kubelet[6235]: I0126 00:10:00.264625    6235 reconciler.go:295] "Volume detached for volume \"kube-api-access-zgzbc\" (UniqueName: \"kubernetes.io/projected/c848720a-b1cd-4bed-b001-86e68662a8a5-kube-api-access-zgzbc\") on node \"functional-20220125160520-11219\" DevicePath \"\""
	Jan 26 00:10:00 functional-20220125160520-11219 kubelet[6235]: I0126 00:10:00.264670    6235 reconciler.go:295] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/c848720a-b1cd-4bed-b001-86e68662a8a5-test-volume\") on node \"functional-20220125160520-11219\" DevicePath \"\""
	Jan 26 00:13:31 functional-20220125160520-11219 kubelet[6235]: W0126 00:13:31.415137    6235 sysinfo.go:203] Nodes topology is not available, providing CPU topology
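Note: the repeated "Failed to read pod IP from plugin/docker ... invalid network status" lines are a benign race in the dockershim path while a sandbox is still being set up; each affected pod gets its IP moments later (the describe output below shows busybox-mount at 172.17.0.7). A one-liner to confirm an IP was assigned:

	kubectl --context functional-20220125160520-11219 get pod busybox-mount -o jsonpath='{.status.podIP}'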
	
	* 
	* ==> kubernetes-dashboard [d2580dc21adf] <==
	* 2022/01/26 00:09:59 Using namespace: kubernetes-dashboard
	2022/01/26 00:09:59 Using in-cluster config to connect to apiserver
	2022/01/26 00:09:59 Using secret token for csrf signing
	2022/01/26 00:09:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2022/01/26 00:09:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2022/01/26 00:09:59 Successful initial request to the apiserver, version: v1.23.2
	2022/01/26 00:09:59 Generating JWE encryption key
	2022/01/26 00:09:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2022/01/26 00:09:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2022/01/26 00:10:00 Initializing JWE encryption key from synchronized object
	2022/01/26 00:10:00 Creating in-cluster Sidecar client
	2022/01/26 00:10:00 Successful request to sidecar
	2022/01/26 00:10:00 Serving insecurely on HTTP port: 9090
	2022/01/26 00:09:59 Starting overwatch
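Note: the 00:09:59 "Starting overwatch" line printed after the 00:10:00 entries is not clock skew; the dashboard logs from several goroutines, so lines are not strictly chronological. Per the log it serves plain HTTP on port 9090 inside the cluster, which minikube normally fronts with a local proxy:

	out/minikube-darwin-amd64 dashboard --url -p functional-20220125160520-11219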
	
	* 
	* ==> storage-provisioner [18a87e13fcf2] <==
	* I0126 00:08:46.612140       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0126 00:08:46.620087       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0126 00:08:46.620130       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0126 00:08:46.628250       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0126 00:08:46.628348       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-20220125160520-11219_c2ff4d3e-6e9c-42e1-86a7-c010871eaf9e!
	I0126 00:08:46.628373       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bea99743-0232-48ca-b241-9fc55db8ef26", APIVersion:"v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20220125160520-11219_c2ff4d3e-6e9c-42e1-86a7-c010871eaf9e became leader
	I0126 00:08:46.729472       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-20220125160520-11219_c2ff4d3e-6e9c-42e1-86a7-c010871eaf9e!
	I0126 00:09:33.431624       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0126 00:09:33.432119       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"e6f8906e-e7af-4cbc-bc49-666bff721bcf", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0126 00:09:33.431718       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    81c6e806-5a02-465a-b75f-8067cc690ef3 445 0 2022-01-26 00:08:45 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2022-01-26 00:08:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-e6f8906e-e7af-4cbc-bc49-666bff721bcf &PersistentVolumeClaim{ObjectMeta:{myclaim  default  e6f8906e-e7af-4cbc-bc49-666bff721bcf 563 0 2022-01-26 00:09:33 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2022-01-26 00:09:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2022-01-26 00:09:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0126 00:09:33.434237       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-e6f8906e-e7af-4cbc-bc49-666bff721bcf" provisioned
	I0126 00:09:33.434310       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0126 00:09:33.434316       1 volume_store.go:212] Trying to save persistentvolume "pvc-e6f8906e-e7af-4cbc-bc49-666bff721bcf"
	I0126 00:09:33.440964       1 volume_store.go:219] persistentvolume "pvc-e6f8906e-e7af-4cbc-bc49-666bff721bcf" saved
	I0126 00:09:33.441325       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"e6f8906e-e7af-4cbc-bc49-666bff721bcf", APIVersion:"v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-e6f8906e-e7af-4cbc-bc49-666bff721bcf
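Note: the block above is the complete lifecycle of minikube's hostpath provisioner answering the "default/myclaim" PVC: provision, save the PV object, emit ProvisioningSucceeded. The volume is just a directory inside the node, at the path named in the log, so it can be inspected with:

	out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "ls -la /tmp/hostpath-provisioner/default/myclaim"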
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p functional-20220125160520-11219 -n functional-20220125160520-11219
helpers_test.go:262: (dbg) Run:  kubectl --context functional-20220125160520-11219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:271: non-running pods: busybox-mount
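Note: busybox-mount is listed here only because the field selector status.phase!=Running also matches pods in phase Succeeded; the describe output below shows it completed with exit code 0 rather than failing. The same query can be reproduced verbatim against this cluster:

	kubectl --context functional-20220125160520-11219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running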
helpers_test.go:273: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context functional-20220125160520-11219 describe pod busybox-mount
helpers_test.go:281: (dbg) kubectl --context functional-20220125160520-11219 describe pod busybox-mount:

-- stdout --
	Name:         busybox-mount
	Namespace:    default
	Priority:     0
	Node:         functional-20220125160520-11219/192.168.49.2
	Start Time:   Tue, 25 Jan 2022 16:09:54 -0800
	Labels:       integration-test=busybox-mount
	Annotations:  <none>
	Status:       Succeeded
	IP:           172.17.0.7
	IPs:
	  IP:  172.17.0.7
	Containers:
	  mount-munger:
	    Container ID:  docker://332007e467f30240a220650379601de4e75c1304b0e86ec3cdb7b1934480050b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 25 Jan 2022 16:09:57 -0800
	      Finished:     Tue, 25 Jan 2022 16:09:57 -0800
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zgzbc (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zgzbc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5m7s  default-scheduler  Successfully assigned default/busybox-mount to functional-20220125160520-11219
	  Normal  Pulling    5m6s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m4s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.718791053s
	  Normal  Created    5m4s  kubelet            Created container mount-munger
	  Normal  Started    5m4s  kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:284: <<< TestFunctional/parallel/DashboardCmd FAILED: end of post-mortem logs <<<
helpers_test.go:285: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/DashboardCmd (304.03s)

TestSkaffold (96.08s)

=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2872605338 version
skaffold_test.go:61: skaffold version: v1.35.2
skaffold_test.go:64: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-20220125165055-11219 --memory=2600 --driver=docker 
E0125 16:51:53.925771   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:52:07.971952   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
skaffold_test.go:64: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-20220125165055-11219 --memory=2600 --driver=docker : (1m13.95907171s)
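Note: the two cert_rotation errors above come from the minikube client's certificate watcher still tracking kubeconfig entries for profiles created (and since cleaned up) earlier in this run; they do not affect the new skaffold cluster. Outside a shared CI run, a blunt way to clear such leftovers would be:

	out/minikube-darwin-amd64 delete --all --purge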
skaffold_test.go:84: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:108: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2872605338 run --minikube-profile skaffold-20220125165055-11219 --kube-context skaffold-20220125165055-11219 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:108: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2872605338 run --minikube-profile skaffold-20220125165055-11219 --kube-context skaffold-20220125165055-11219 --status-check=true --port-forward=false --interactive=false: exit status 1 (2.621374644s)

-- stdout --
	Generating tags...
	 - leeroy-web -> leeroy-web:latest
	 - leeroy-app -> leeroy-app:latest
	Some taggers failed. Rerun with -vdebug for errors.
	Checking cache...
	 - leeroy-web: Error checking cache.

-- /stdout --
** stderr ** 
	getting hash for artifact "leeroy-web": getting dependencies for "leeroy-web": parsing ONBUILD instructions: retrieving image "golang:1.12.9-alpine3.10": GET https://index.docker.io/v2/library/golang/manifests/1.12.9-alpine3.10: TOOMANYREQUESTS: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

** /stderr **
skaffold_test.go:110: error running skaffold: exit status 1
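Note: the failure is neither skaffold's nor the cluster's. While hashing the leeroy-web build context, skaffold tried to resolve the ONBUILD base image golang:1.12.9-alpine3.10 from Docker Hub anonymously and hit the registry pull rate limit (TOOMANYREQUESTS). A hedged sketch of the usual workaround on the CI host:

	docker login                          # authenticated pulls count against a much higher limit
	docker pull golang:1.12.9-alpine3.10  # warm the local cache before re-running skaffold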
panic.go:642: *** TestSkaffold FAILED at 2022-01-25 16:52:13.351016 -0800 PST m=+3224.029879089
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestSkaffold]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect skaffold-20220125165055-11219
helpers_test.go:236: (dbg) docker inspect skaffold-20220125165055-11219:

-- stdout --
	[
	    {
	        "Id": "6d35c7f558cf495a3df2fe404f98a902ff754f62736659107112be1cec7d80fc",
	        "Created": "2022-01-26T00:51:09.950140045Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 201118,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-01-26T00:51:18.476520249Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:64d09634c60d2a75596bc705aa84bdc41f76fe47c5d9ee362550bffbdc256979",
	        "ResolvConfPath": "/var/lib/docker/containers/6d35c7f558cf495a3df2fe404f98a902ff754f62736659107112be1cec7d80fc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6d35c7f558cf495a3df2fe404f98a902ff754f62736659107112be1cec7d80fc/hostname",
	        "HostsPath": "/var/lib/docker/containers/6d35c7f558cf495a3df2fe404f98a902ff754f62736659107112be1cec7d80fc/hosts",
	        "LogPath": "/var/lib/docker/containers/6d35c7f558cf495a3df2fe404f98a902ff754f62736659107112be1cec7d80fc/6d35c7f558cf495a3df2fe404f98a902ff754f62736659107112be1cec7d80fc-json.log",
	        "Name": "/skaffold-20220125165055-11219",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "skaffold-20220125165055-11219:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "skaffold-20220125165055-11219",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2726297600,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [
	                {
	                    "PathOnHost": "/dev/fuse",
	                    "PathInContainer": "/dev/fuse",
	                    "CgroupPermissions": "rwm"
	                }
	            ],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2726297600,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6967fce574604abfdfcf77ffa86a219c54a3dfa2895587fb50bad27bf73f0576-init/diff:/var/lib/docker/overlay2/f6644e174c834e004f7c6a7fd84cb30249e653cc28149565c547eb0aa32b2b22/diff:/var/lib/docker/overlay2/968f387d844b747986ea23bbefc5cc36d27f1844123d64b03096b498a18227f0/diff:/var/lib/docker/overlay2/3203a36297bf74d3eefe39969197da20131c1144329e83fbc94f578c7183d9f4/diff:/var/lib/docker/overlay2/8e7880efcc82627322c02b71747b68679f667fb86571fe7533cf6a3d4b633356/diff:/var/lib/docker/overlay2/8795341a48190b7df6b12df8a17a4a2f2191bf7590cfa3f6470735930a3b667a/diff:/var/lib/docker/overlay2/02ea8e3eccc1e90d73656ef34f9421af3e293a438eb7e1905e2f87b778316786/diff:/var/lib/docker/overlay2/2aadeb562c286a175a060b66c8bc81d30d2b6de6c626522114af1e85ee2f3aee/diff:/var/lib/docker/overlay2/afe0fbc73729589e58d48d713b0aeb040b2d44035ffdee605a8e7e2babe0a2c3/diff:/var/lib/docker/overlay2/798e8675f5274e1f3c053597604a9d0636b6d456064cc94af9e1a5043428130e/diff:/var/lib/docker/overlay2/36d3ce
8c4bd049112c0fcfddc814a9c3bb9209c8edfd0584c32abd8768dfb7c6/diff:/var/lib/docker/overlay2/57011940c99ff9806222058d5c6c6aa07b2828eebc9a73f8a2b8f0157fb66f79/diff:/var/lib/docker/overlay2/452cf0420b92814a98512be6c74ad70cceacc404351244100b022f166a0b6a40/diff:/var/lib/docker/overlay2/49f36a3896f3e0c99ca1bb0a62416d83f28c2cdfcbc8ac9073f5fe2c6fe67a7d/diff:/var/lib/docker/overlay2/6b20e443c8c5775f2cd9cb99cb2b7d0254ff94ae51a6201094569a66fda56064/diff:/var/lib/docker/overlay2/6cc21e5ad7dfd64f6833e2c9278ad90447e59a0d58e54d1609ef3b05a86327a5/diff:/var/lib/docker/overlay2/d8027491b5695727611fc0fd4362991c33b9c494d093578d3293a874b11b70b0/diff:/var/lib/docker/overlay2/71565a6201e5f60cbbb031d8ad83db62520a2f03b985543c1a2df91760b617be/diff:/var/lib/docker/overlay2/8743e4a2f62d0b4d7d131f27b8ab14d6275204ab16f07549044818ea2ef91cea/diff:/var/lib/docker/overlay2/008f05bcc283b9d4492f78757715f81657b44cd5329d836d8d87067d6958f43a/diff:/var/lib/docker/overlay2/1f92040662498e9049ddb24ff6b7472c3fb47d8819226b89556a51b906ecd170/diff:/var/lib/d
ocker/overlay2/c1563ff023059aaacee9eefc7c1eed9ba16b25747bbdb8626371e3b6dd42ce11/diff:/var/lib/docker/overlay2/455e83cf21082e41accb0e919605ea47b4fe05ce653f50faae5c757e3d6cbf64/diff:/var/lib/docker/overlay2/264ed6f36ee2e48e54c0f7f4a13890f96421d3347c31961597e8b7483f8e0f98/diff:/var/lib/docker/overlay2/d715a2db6a3afc96c9e10fc741df34f4e9dde06e14bb7887d8fa12b67af0eaa9/diff:/var/lib/docker/overlay2/4f569f12545ea7ce6eeb31404f4255c44cc9b464ee63fc15474251eb86d98cee/diff:/var/lib/docker/overlay2/9934c2c689d1047f3e4d540b6a6b5d2ba34f2b85cd2f9c6176f7adca6360aab9/diff:/var/lib/docker/overlay2/d76e0cbc1b7baf6404c6077d4c903a5a4fcc4846840cfd1646cc78331cc092b4/diff:/var/lib/docker/overlay2/a6143d78c1dde4f438fd87f67c9f5fea09585e4b086acd45d24981f596a94bc3/diff:/var/lib/docker/overlay2/6cc78044264409c74c0b4c0c9c39daede8846ab1d73c4faa0e79c82b263eac2e/diff:/var/lib/docker/overlay2/ff1878038c477022daf6fea1648edea9cbdc6f6526860d9362f23497105dfeda/diff:/var/lib/docker/overlay2/c72badf903e72856a68f4f626defe17f6638fc76bd774750482eae7f46d
015ea/diff:/var/lib/docker/overlay2/e03e93d8d5c22e18f6ed5fa6e50efc1d6f3e1ec6f937d67a2b24c16d69b76a09/diff:/var/lib/docker/overlay2/7db41ccc63adc1cfb33539022e3744f651e9e40e12fc4042463bcd2290d1b1b1/diff:/var/lib/docker/overlay2/a13fef140530da8a24cdccb88d72678f269a44bb0cc5f296c6317b2415fe7801/diff:/var/lib/docker/overlay2/47723e734a521920ecfa6df4f32a286eb96c1d53dff52ba0af2d473f850ee8a0/diff:/var/lib/docker/overlay2/7117426bf3fd144bc4534ccc4b0358b9adb4e11733845990620cbbda661210d7/diff:/var/lib/docker/overlay2/4db9b623984c6e2a4a0a3d7f2d9b70e667e0eeff1f10359942a38dc2a82a2953/diff:/var/lib/docker/overlay2/2e75311da297566b0cc76cccfc738f5b2ccc35db30f4998c5dc1faaa18e94ed5/diff:/var/lib/docker/overlay2/9d203568b478a0d518d01c50ec8532e72438ddcb0984e45d0c10306ef71f2314/diff:/var/lib/docker/overlay2/8eb183aad13026c88b993b0004dc591ea5542085a0c0ea9eaa82a98aa31f3e8b/diff:/var/lib/docker/overlay2/6bbf259eb4862871f7a2e30bc085d2b7004c27644fa19af4a3679280676240d3/diff:/var/lib/docker/overlay2/f2b18eed216c511e28534f638460c078f3b627
9ed23222eb15a2f214202e9ae3/diff:/var/lib/docker/overlay2/8af1a74786eb08908e09b0a5b6c8d4f744680cb1712d3e2b2dec82b790e80bdf/diff:/var/lib/docker/overlay2/64c9df0e0bc93db79a5d7647a1eb9ff931c29be23788bb62fdcb669191cd6423/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6967fce574604abfdfcf77ffa86a219c54a3dfa2895587fb50bad27bf73f0576/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6967fce574604abfdfcf77ffa86a219c54a3dfa2895587fb50bad27bf73f0576/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6967fce574604abfdfcf77ffa86a219c54a3dfa2895587fb50bad27bf73f0576/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "skaffold-20220125165055-11219",
	                "Source": "/var/lib/docker/volumes/skaffold-20220125165055-11219/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "skaffold-20220125165055-11219",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "skaffold-20220125165055-11219",
	                "name.minikube.sigs.k8s.io": "skaffold-20220125165055-11219",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "360436f09229869c9ef1109a5e324646d7b76714bbcca2e8bd87a5915f303727",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59742"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59743"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59744"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59745"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59746"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/360436f09229",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "skaffold-20220125165055-11219": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6d35c7f558cf",
	                        "skaffold-20220125165055-11219"
	                    ],
	                    "NetworkID": "1daaa85f783be9adc1cb79944fe57cd9da9611c9effa10c1bdfb11ab853f405b",
	                    "EndpointID": "706eb1a1bc90354b1b1972987dd2a8318249429a9503e797ceb1e27803efe0f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
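Note: the inspect output documents how the kic driver wires up the "node": the kicbase container publishes ports 22, 2376, 5000, 8443 and 32443 on random localhost ports (59742-59746 in this run), with the apiserver reached through the 8443 mapping. The live mapping can be read back with:

	docker port skaffold-20220125165055-11219 8443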
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p skaffold-20220125165055-11219 -n skaffold-20220125165055-11219
helpers_test.go:245: <<< TestSkaffold FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestSkaffold]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p skaffold-20220125165055-11219 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p skaffold-20220125165055-11219 logs -n 25: (1.906670136s)
helpers_test.go:253: TestSkaffold logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|-----------------------------------------------------------------------------------------------------------------------------------|-------------------------------------|----------|---------|-------------------------------|-------------------------------|
	|  Command   |                                                               Args                                                                |               Profile               |   User   | Version |          Start Time           |           End Time            |
	|------------|-----------------------------------------------------------------------------------------------------------------------------------|-------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| -p         | multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219 sudo cat                                                     | multinode-20220125162801-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:34:12 PST | Tue, 25 Jan 2022 16:34:13 PST |
	|            | /home/docker/cp-test_multinode-20220125162801-11219-m03_multinode-20220125162801-11219.txt                                        |                                     |          |         |                               |                               |
	| -p         | multinode-20220125162801-11219 cp multinode-20220125162801-11219-m03:/home/docker/cp-test.txt                                     | multinode-20220125162801-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:34:13 PST | Tue, 25 Jan 2022 16:34:14 PST |
	|            | multinode-20220125162801-11219-m02:/home/docker/cp-test_multinode-20220125162801-11219-m03_multinode-20220125162801-11219-m02.txt |                                     |          |         |                               |                               |
	| -p         | multinode-20220125162801-11219                                                                                                    | multinode-20220125162801-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:34:14 PST | Tue, 25 Jan 2022 16:34:14 PST |
	|            | ssh -n                                                                                                                            |                                     |          |         |                               |                               |
	|            | multinode-20220125162801-11219-m03                                                                                                |                                     |          |         |                               |                               |
	|            | sudo cat /home/docker/cp-test.txt                                                                                                 |                                     |          |         |                               |                               |
	| -p         | multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219-m02 sudo cat                                                 | multinode-20220125162801-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:34:14 PST | Tue, 25 Jan 2022 16:34:15 PST |
	|            | /home/docker/cp-test_multinode-20220125162801-11219-m03_multinode-20220125162801-11219-m02.txt                                    |                                     |          |         |                               |                               |
	| -p         | multinode-20220125162801-11219                                                                                                    | multinode-20220125162801-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:34:15 PST | Tue, 25 Jan 2022 16:34:23 PST |
	|            | node stop m03                                                                                                                     |                                     |          |         |                               |                               |
	| -p         | multinode-20220125162801-11219                                                                                                    | multinode-20220125162801-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:34:26 PST | Tue, 25 Jan 2022 16:35:16 PST |
	|            | node start m03                                                                                                                    |                                     |          |         |                               |                               |
	|            | --alsologtostderr                                                                                                                 |                                     |          |         |                               |                               |
	| stop       | -p                                                                                                                                | multinode-20220125162801-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:35:18 PST | Tue, 25 Jan 2022 16:35:58 PST |
	|            | multinode-20220125162801-11219                                                                                                    |                                     |          |         |                               |                               |
	| start      | -p                                                                                                                                | multinode-20220125162801-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:35:59 PST | Tue, 25 Jan 2022 16:39:32 PST |
	|            | multinode-20220125162801-11219                                                                                                    |                                     |          |         |                               |                               |
	|            | --wait=true -v=8                                                                                                                  |                                     |          |         |                               |                               |
	|            | --alsologtostderr                                                                                                                 |                                     |          |         |                               |                               |
	| -p         | multinode-20220125162801-11219                                                                                                    | multinode-20220125162801-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:39:32 PST | Tue, 25 Jan 2022 16:39:44 PST |
	|            | node delete m03                                                                                                                   |                                     |          |         |                               |                               |
	| -p         | multinode-20220125162801-11219                                                                                                    | multinode-20220125162801-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:39:47 PST | Tue, 25 Jan 2022 16:40:11 PST |
	|            | stop                                                                                                                              |                                     |          |         |                               |                               |
	| start      | -p                                                                                                                                | multinode-20220125162801-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:40:11 PST | Tue, 25 Jan 2022 16:42:41 PST |
	|            | multinode-20220125162801-11219                                                                                                    |                                     |          |         |                               |                               |
	|            | --wait=true -v=8                                                                                                                  |                                     |          |         |                               |                               |
	|            | --alsologtostderr                                                                                                                 |                                     |          |         |                               |                               |
	|            | --driver=docker                                                                                                                   |                                     |          |         |                               |                               |
	| start      | -p                                                                                                                                | multinode-20220125162801-11219-m03  | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:42:44 PST | Tue, 25 Jan 2022 16:44:07 PST |
	|            | multinode-20220125162801-11219-m03                                                                                                |                                     |          |         |                               |                               |
	|            | --driver=docker                                                                                                                   |                                     |          |         |                               |                               |
	| delete     | -p                                                                                                                                | multinode-20220125162801-11219-m03  | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:44:08 PST | Tue, 25 Jan 2022 16:44:24 PST |
	|            | multinode-20220125162801-11219-m03                                                                                                |                                     |          |         |                               |                               |
	| delete     | -p                                                                                                                                | multinode-20220125162801-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:44:24 PST | Tue, 25 Jan 2022 16:44:45 PST |
	|            | multinode-20220125162801-11219                                                                                                    |                                     |          |         |                               |                               |
	| start      | -p                                                                                                                                | test-preload-20220125164445-11219   | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:44:45 PST | Tue, 25 Jan 2022 16:47:05 PST |
	|            | test-preload-20220125164445-11219                                                                                                 |                                     |          |         |                               |                               |
	|            | --memory=2200 --alsologtostderr                                                                                                   |                                     |          |         |                               |                               |
	|            | --wait=true --preload=false                                                                                                       |                                     |          |         |                               |                               |
	|            | --driver=docker                                                                                                                   |                                     |          |         |                               |                               |
	|            | --kubernetes-version=v1.17.0                                                                                                      |                                     |          |         |                               |                               |
	| ssh        | -p                                                                                                                                | test-preload-20220125164445-11219   | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:47:05 PST | Tue, 25 Jan 2022 16:47:07 PST |
	|            | test-preload-20220125164445-11219                                                                                                 |                                     |          |         |                               |                               |
	|            | -- docker pull                                                                                                                    |                                     |          |         |                               |                               |
	|            | gcr.io/k8s-minikube/busybox                                                                                                       |                                     |          |         |                               |                               |
	| start      | -p                                                                                                                                | test-preload-20220125164445-11219   | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:47:08 PST | Tue, 25 Jan 2022 16:48:09 PST |
	|            | test-preload-20220125164445-11219                                                                                                 |                                     |          |         |                               |                               |
	|            | --memory=2200 --alsologtostderr                                                                                                   |                                     |          |         |                               |                               |
	|            | -v=1 --wait=true --driver=docker                                                                                                  |                                     |          |         |                               |                               |
	|            | --kubernetes-version=v1.17.3                                                                                                      |                                     |          |         |                               |                               |
	| ssh        | -p                                                                                                                                | test-preload-20220125164445-11219   | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:48:09 PST | Tue, 25 Jan 2022 16:48:10 PST |
	|            | test-preload-20220125164445-11219                                                                                                 |                                     |          |         |                               |                               |
	|            | -- docker images                                                                                                                  |                                     |          |         |                               |                               |
	| delete     | -p                                                                                                                                | test-preload-20220125164445-11219   | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:48:10 PST | Tue, 25 Jan 2022 16:48:23 PST |
	|            | test-preload-20220125164445-11219                                                                                                 |                                     |          |         |                               |                               |
	| start      | -p                                                                                                                                | scheduled-stop-20220125164823-11219 | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:48:23 PST | Tue, 25 Jan 2022 16:49:36 PST |
	|            | scheduled-stop-20220125164823-11219                                                                                               |                                     |          |         |                               |                               |
	|            | --memory=2048 --driver=docker                                                                                                     |                                     |          |         |                               |                               |
	| stop       | -p                                                                                                                                | scheduled-stop-20220125164823-11219 | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:49:37 PST | Tue, 25 Jan 2022 16:49:37 PST |
	|            | scheduled-stop-20220125164823-11219                                                                                               |                                     |          |         |                               |                               |
	|            | --cancel-scheduled                                                                                                                |                                     |          |         |                               |                               |
	| stop       | -p                                                                                                                                | scheduled-stop-20220125164823-11219 | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:50:03 PST | Tue, 25 Jan 2022 16:50:35 PST |
	|            | scheduled-stop-20220125164823-11219                                                                                               |                                     |          |         |                               |                               |
	|            | --schedule 15s                                                                                                                    |                                     |          |         |                               |                               |
	| delete     | -p                                                                                                                                | scheduled-stop-20220125164823-11219 | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:50:49 PST | Tue, 25 Jan 2022 16:50:55 PST |
	|            | scheduled-stop-20220125164823-11219                                                                                               |                                     |          |         |                               |                               |
	| start      | -p                                                                                                                                | skaffold-20220125165055-11219       | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:50:56 PST | Tue, 25 Jan 2022 16:52:10 PST |
	|            | skaffold-20220125165055-11219                                                                                                     |                                     |          |         |                               |                               |
	|            | --memory=2600 --driver=docker                                                                                                     |                                     |          |         |                               |                               |
	| docker-env | --shell none -p                                                                                                                   | skaffold-20220125165055-11219       | skaffold | v1.25.1 | Tue, 25 Jan 2022 16:52:11 PST | Tue, 25 Jan 2022 16:52:12 PST |
	|            | skaffold-20220125165055-11219                                                                                                     |                                     |          |         |                               |                               |
	|            | --user=skaffold                                                                                                                   |                                     |          |         |                               |                               |
	|------------|-----------------------------------------------------------------------------------------------------------------------------------|-------------------------------------|----------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/25 16:50:56
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0125 16:50:56.765708   20778 out.go:297] Setting OutFile to fd 1 ...
	I0125 16:50:56.765842   20778 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:50:56.765845   20778 out.go:310] Setting ErrFile to fd 2...
	I0125 16:50:56.765847   20778 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:50:56.765935   20778 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/bin
	I0125 16:50:56.766276   20778 out.go:304] Setting JSON to false
	I0125 16:50:56.793282   20778 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":8431,"bootTime":1643149825,"procs":314,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0125 16:50:56.793364   20778 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0125 16:50:56.819412   20778 out.go:176] * [skaffold-20220125165055-11219] minikube v1.25.1 on Darwin 11.1
	I0125 16:50:56.819532   20778 notify.go:174] Checking for updates...
	I0125 16:50:56.873282   20778 out.go:176]   - MINIKUBE_LOCATION=13326
	I0125 16:50:56.899084   20778 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	I0125 16:50:56.925211   20778 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0125 16:50:56.951296   20778 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0125 16:50:56.977124   20778 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	I0125 16:50:56.977349   20778 driver.go:344] Setting default libvirt URI to qemu:///system
	I0125 16:50:57.067177   20778 docker.go:132] docker version: linux-20.10.5
	I0125 16:50:57.067332   20778 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 16:50:57.230631   20778 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2022-01-26 00:50:57.177818773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 16:50:57.277678   20778 out.go:176] * Using the docker driver based on user configuration
	I0125 16:50:57.277700   20778 start.go:280] selected driver: docker
	I0125 16:50:57.277711   20778 start.go:795] validating driver "docker" against <nil>
	I0125 16:50:57.277723   20778 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0125 16:50:57.280123   20778 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 16:50:57.435906   20778 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:46 SystemTime:2022-01-26 00:50:57.391444892 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 16:50:57.435993   20778 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0125 16:50:57.436113   20778 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0125 16:50:57.436124   20778 start_flags.go:810] Wait components to verify : map[apiserver:true system_pods:true]
	I0125 16:50:57.436135   20778 cni.go:93] Creating CNI manager for ""
	I0125 16:50:57.436139   20778 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0125 16:50:57.436150   20778 start_flags.go:302] config:
	{Name:skaffold-20220125165055-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:skaffold-20220125165055-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0125 16:50:57.483280   20778 out.go:176] * Starting control plane node skaffold-20220125165055-11219 in cluster skaffold-20220125165055-11219
	I0125 16:50:57.483311   20778 cache.go:120] Beginning downloading kic base image for docker with docker
	I0125 16:50:57.509066   20778 out.go:176] * Pulling base image ...
	I0125 16:50:57.509203   20778 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0125 16:50:57.509203   20778 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0125 16:50:57.509239   20778 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0125 16:50:57.509250   20778 cache.go:57] Caching tarball of preloaded images
	I0125 16:50:57.509416   20778 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0125 16:50:57.509437   20778 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
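
The two cache.go lines above are the decision that the failing preload-exists tests earlier in this report probe from the outside: if the preloaded tarball is already on disk, the download is skipped entirely. A minimal sketch of that check in Go (the path below is illustrative, not the real cache layout):

// preloadcheck.go - sketch of the "found local preload, skipping download"
// decision logged above. The path is illustrative.
package main

import (
	"fmt"
	"os"
)

// cachedPreload reports whether a non-empty preload tarball already exists.
func cachedPreload(path string) bool {
	info, err := os.Stat(path)
	return err == nil && info.Size() > 0
}

func main() {
	p := "/tmp/.minikube/cache/preloaded-tarball/demo.tar.lz4" // illustrative
	if cachedPreload(p) {
		fmt.Println("found local preload, skipping download")
	} else {
		fmt.Println("no local preload; would download and verify its checksum")
	}
}
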
	I0125 16:50:57.511175   20778 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/config.json ...
	I0125 16:50:57.511212   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/config.json: {Name:mk57c5559872395f787ba2f006c631179b280efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 16:50:57.616828   20778 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0125 16:50:57.616845   20778 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0125 16:50:57.616856   20778 cache.go:208] Successfully downloaded all kic artifacts
	I0125 16:50:57.616904   20778 start.go:313] acquiring machines lock for skaffold-20220125165055-11219: {Name:mk040ff7b68a90c05c6de367daeb78cf9bf8ca6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 16:50:57.617046   20778 start.go:317] acquired machines lock for "skaffold-20220125165055-11219" in 132.165µs
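
The machines lock acquired here serializes provisioning of a profile across concurrent minikube processes; note the Delay:500ms and Timeout:10m0s in the lock spec above. A minimal sketch of the same idea, assuming a simple create-exclusive file lock (minikube's actual lock implementation differs in detail):

// filelock.go - sketch of a file-based machines lock: poll until the lock
// file can be created exclusively, or give up at the deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay) // mirrors the Delay:500ms in the log's lock spec
	}
}

func main() {
	release, err := acquire("/tmp/machines-demo.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; machine provisioning would run here")
}
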
	I0125 16:50:57.617096   20778 start.go:89] Provisioning new machine with config: &{Name:skaffold-20220125165055-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:skaffold-20220125165055-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}
	I0125 16:50:57.617178   20778 start.go:126] createHost starting for "" (driver="docker")
	I0125 16:50:57.664627   20778 out.go:203] * Creating docker container (CPUs=2, Memory=2600MB) ...
	I0125 16:50:57.664821   20778 start.go:160] libmachine.API.Create for "skaffold-20220125165055-11219" (driver="docker")
	I0125 16:50:57.664843   20778 client.go:168] LocalClient.Create starting
	I0125 16:50:57.664936   20778 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem
	I0125 16:50:57.664973   20778 main.go:130] libmachine: Decoding PEM data...
	I0125 16:50:57.664985   20778 main.go:130] libmachine: Parsing certificate...
	I0125 16:50:57.665037   20778 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem
	I0125 16:50:57.665063   20778 main.go:130] libmachine: Decoding PEM data...
	I0125 16:50:57.665073   20778 main.go:130] libmachine: Parsing certificate...
	I0125 16:50:57.665677   20778 cli_runner.go:133] Run: docker network inspect skaffold-20220125165055-11219 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0125 16:50:57.771595   20778 cli_runner.go:180] docker network inspect skaffold-20220125165055-11219 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0125 16:50:57.771719   20778 network_create.go:254] running [docker network inspect skaffold-20220125165055-11219] to gather additional debugging logs...
	I0125 16:50:57.771742   20778 cli_runner.go:133] Run: docker network inspect skaffold-20220125165055-11219
	W0125 16:50:57.881384   20778 cli_runner.go:180] docker network inspect skaffold-20220125165055-11219 returned with exit code 1
	I0125 16:50:57.881403   20778 network_create.go:257] error running [docker network inspect skaffold-20220125165055-11219]: docker network inspect skaffold-20220125165055-11219: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: skaffold-20220125165055-11219
	I0125 16:50:57.881423   20778 network_create.go:259] output of [docker network inspect skaffold-20220125165055-11219]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: skaffold-20220125165055-11219
	
	** /stderr **
	I0125 16:50:57.881525   20778 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0125 16:50:57.991654   20778 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000186a88] misses:0}
	I0125 16:50:57.991689   20778 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0125 16:50:57.991710   20778 network_create.go:106] attempt to create docker network skaffold-20220125165055-11219 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0125 16:50:57.991792   20778 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20220125165055-11219
	I0125 16:51:02.481753   20778 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true skaffold-20220125165055-11219: (4.489960393s)
	I0125 16:51:02.481771   20778 network_create.go:90] docker network skaffold-20220125165055-11219 192.168.49.0/24 created
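
The network just created is an ordinary user-defined bridge with a pinned subnet, gateway and MTU. A sketch of issuing the same call by shelling out to the docker CLI, the way minikube's cli_runner does (the network name "demo-network" is illustrative; the flags are the ones in the log line above):

// netcreate.go - sketch of the `docker network create` call logged above,
// issued via os/exec.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"demo-network")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("network create failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("created network: %s", out)
}
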
	I0125 16:51:02.481789   20778 kic.go:106] calculated static IP "192.168.49.2" for the "skaffold-20220125165055-11219" container
	I0125 16:51:02.481909   20778 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0125 16:51:02.588989   20778 cli_runner.go:133] Run: docker volume create skaffold-20220125165055-11219 --label name.minikube.sigs.k8s.io=skaffold-20220125165055-11219 --label created_by.minikube.sigs.k8s.io=true
	I0125 16:51:02.702028   20778 oci.go:102] Successfully created a docker volume skaffold-20220125165055-11219
	I0125 16:51:02.702149   20778 cli_runner.go:133] Run: docker run --rm --name skaffold-20220125165055-11219-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20220125165055-11219 --entrypoint /usr/bin/test -v skaffold-20220125165055-11219:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I0125 16:51:03.219385   20778 oci.go:106] Successfully prepared a docker volume skaffold-20220125165055-11219
	I0125 16:51:03.219429   20778 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0125 16:51:03.219440   20778 kic.go:179] Starting extracting preloaded images to volume ...
	I0125 16:51:03.219573   20778 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-20220125165055-11219:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I0125 16:51:09.652410   20778 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v skaffold-20220125165055-11219:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (6.432838881s)
	I0125 16:51:09.652426   20778 kic.go:188] duration metric: took 6.433038 seconds to extract preloaded images to volume
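
The extraction step above uses a throwaway container whose entrypoint is tar: the tarball is mounted read-only at /preloaded.tar, the target volume at /extractDir, and `tar -I lz4` unpacks one into the other. A minimal sketch of that pattern (paths, volume and image names are illustrative):

// preloadextract.go - sketch of the extract-into-volume pattern logged above.
package main

import (
	"fmt"
	"os/exec"
)

func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload("/tmp/preloaded-images.tar.lz4", "demo-volume",
		"gcr.io/k8s-minikube/kicbase:v0.0.29")
	if err != nil {
		fmt.Println(err)
	}
}
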
	I0125 16:51:09.652559   20778 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0125 16:51:09.821485   20778 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-20220125165055-11219 --name skaffold-20220125165055-11219 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20220125165055-11219 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-20220125165055-11219 --network skaffold-20220125165055-11219 --ip 192.168.49.2 --volume skaffold-20220125165055-11219:/var --security-opt apparmor=unconfined --memory=2600mb --memory-swap=2600mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I0125 16:51:18.478743   20778 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname skaffold-20220125165055-11219 --name skaffold-20220125165055-11219 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=skaffold-20220125165055-11219 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=skaffold-20220125165055-11219 --network skaffold-20220125165055-11219 --ip 192.168.49.2 --volume skaffold-20220125165055-11219:/var --security-opt apparmor=unconfined --memory=2600mb --memory-swap=2600mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: (8.657234405s)
	I0125 16:51:18.478869   20778 cli_runner.go:133] Run: docker container inspect skaffold-20220125165055-11219 --format={{.State.Running}}
	I0125 16:51:18.586351   20778 cli_runner.go:133] Run: docker container inspect skaffold-20220125165055-11219 --format={{.State.Status}}
	I0125 16:51:18.691811   20778 cli_runner.go:133] Run: docker exec skaffold-20220125165055-11219 stat /var/lib/dpkg/alternatives/iptables
	I0125 16:51:18.853154   20778 oci.go:281] the created container "skaffold-20220125165055-11219" has a running status.
	I0125 16:51:18.853177   20778 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/skaffold-20220125165055-11219/id_rsa...
	I0125 16:51:18.976716   20778 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/skaffold-20220125165055-11219/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0125 16:51:19.134066   20778 cli_runner.go:133] Run: docker container inspect skaffold-20220125165055-11219 --format={{.State.Status}}
	I0125 16:51:19.244756   20778 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0125 16:51:19.244772   20778 kic_runner.go:114] Args: [docker exec --privileged skaffold-20220125165055-11219 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0125 16:51:19.415051   20778 cli_runner.go:133] Run: docker container inspect skaffold-20220125165055-11219 --format={{.State.Status}}
	I0125 16:51:19.518817   20778 machine.go:88] provisioning docker machine ...
	I0125 16:51:19.518848   20778 ubuntu.go:169] provisioning hostname "skaffold-20220125165055-11219"
	I0125 16:51:19.518954   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:51:19.624706   20778 main.go:130] libmachine: Using SSH client type: native
	I0125 16:51:19.624919   20778 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 59742 <nil> <nil>}
	I0125 16:51:19.624932   20778 main.go:130] libmachine: About to run SSH command:
	sudo hostname skaffold-20220125165055-11219 && echo "skaffold-20220125165055-11219" | sudo tee /etc/hostname
	I0125 16:51:19.626480   20778 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0125 16:51:22.772645   20778 main.go:130] libmachine: SSH cmd err, output: <nil>: skaffold-20220125165055-11219
	
	I0125 16:51:22.772734   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:51:22.875134   20778 main.go:130] libmachine: Using SSH client type: native
	I0125 16:51:22.875269   20778 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 59742 <nil> <nil>}
	I0125 16:51:22.875279   20778 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sskaffold-20220125165055-11219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 skaffold-20220125165055-11219/g' /etc/hosts;
				else 
					echo '127.0.1.1 skaffold-20220125165055-11219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0125 16:51:23.011428   20778 main.go:130] libmachine: SSH cmd err, output: <nil>: 
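
The repeated `docker container inspect -f ... "22/tcp" ...` calls in this start-up recover the ephemeral host port Docker published for the container's SSH port (59742 here); minikube then dials sshd at 127.0.0.1 on that port. A sketch of the lookup, reusing the inspect template shown in the log (the container name is illustrative):

// sshport.go - recover the host port mapped to a container's 22/tcp.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("demo-container")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("sshd is published at 127.0.0.1:" + port)
}
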
	I0125 16:51:23.011443   20778 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube}
	I0125 16:51:23.011458   20778 ubuntu.go:177] setting up certificates
	I0125 16:51:23.011465   20778 provision.go:83] configureAuth start
	I0125 16:51:23.011553   20778 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20220125165055-11219
	I0125 16:51:23.116904   20778 provision.go:138] copyHostCerts
	I0125 16:51:23.116997   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem, removing ...
	I0125 16:51:23.117003   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem
	I0125 16:51:23.117117   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem (1082 bytes)
	I0125 16:51:23.117334   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem, removing ...
	I0125 16:51:23.117343   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem
	I0125 16:51:23.117401   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem (1123 bytes)
	I0125 16:51:23.117542   20778 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem, removing ...
	I0125 16:51:23.117545   20778 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem
	I0125 16:51:23.117607   20778 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem (1675 bytes)
	I0125 16:51:23.117726   20778 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem org=jenkins.skaffold-20220125165055-11219 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube skaffold-20220125165055-11219]
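
The server cert generated here is a CA-signed certificate whose SAN list covers the node IP, loopback, and the profile's hostnames. A minimal, self-contained sketch of the same technique with Go's crypto/x509 (key sizes, validity and names are illustrative; minikube signs with its existing ca.pem/ca-key.pem rather than a fresh CA):

// servercert.go - generate a CA-signed server certificate with IP and DNS SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for the profile's existing ca.pem/ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"demoCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server certificate whose SANs cover an IP list and hostname list,
	// analogous to the san=[...] in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"demo"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "demo-profile"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
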
	I0125 16:51:23.308764   20778 provision.go:172] copyRemoteCerts
	I0125 16:51:23.308943   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0125 16:51:23.309021   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:51:23.410879   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59742 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/skaffold-20220125165055-11219/id_rsa Username:docker}
	I0125 16:51:23.504917   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0125 16:51:23.521548   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0125 16:51:23.539549   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0125 16:51:23.556207   20778 provision.go:86] duration metric: configureAuth took 544.736545ms
	I0125 16:51:23.556216   20778 ubuntu.go:193] setting minikube options for container-runtime
	I0125 16:51:23.556389   20778 config.go:176] Loaded profile config "skaffold-20220125165055-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 16:51:23.556465   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:51:23.658098   20778 main.go:130] libmachine: Using SSH client type: native
	I0125 16:51:23.658254   20778 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 59742 <nil> <nil>}
	I0125 16:51:23.658260   20778 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0125 16:51:23.797227   20778 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0125 16:51:23.797236   20778 ubuntu.go:71] root file system type: overlay
	I0125 16:51:23.797382   20778 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0125 16:51:23.797478   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:51:23.899617   20778 main.go:130] libmachine: Using SSH client type: native
	I0125 16:51:23.899778   20778 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 59742 <nil> <nil>}
	I0125 16:51:23.899826   20778 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0125 16:51:24.046539   20778 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0125 16:51:24.046668   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:51:24.149874   20778 main.go:130] libmachine: Using SSH client type: native
	I0125 16:51:24.150040   20778 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 59742 <nil> <nil>}
	I0125 16:51:24.150049   20778 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0125 16:51:46.861454   20778 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-01-26 00:51:24.051874044 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
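
The diff output above comes from the idempotent unit update issued at 16:51:24: render docker.service.new, and only if it differs from the installed unit, swap it in and daemon-reload/enable/restart docker; an unchanged unit leaves the running daemon alone. The same compare-then-swap pattern, sketched in Go (paths are illustrative; assumes a systemd host):

// unitupdate.go - install a rendered unit and bounce the service only when
// the content actually changed.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(installed, rendered, service string) error {
	have, _ := os.ReadFile(installed) // a missing unit reads as empty, so it differs
	want, err := os.ReadFile(rendered)
	if err != nil {
		return err
	}
	if bytes.Equal(have, want) {
		return nil // unit unchanged: leave the running daemon alone
	}
	if err := os.Rename(rendered, installed); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", service}, {"restart", service}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	if err != nil {
		fmt.Println(err)
	}
}
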
	
	I0125 16:51:46.861475   20778 machine.go:91] provisioned docker machine in 27.342868397s
	I0125 16:51:46.861482   20778 client.go:171] LocalClient.Create took 49.197042084s
	I0125 16:51:46.861496   20778 start.go:168] duration metric: libmachine.API.Create for "skaffold-20220125165055-11219" took 49.197081327s
	I0125 16:51:46.861509   20778 start.go:267] post-start starting for "skaffold-20220125165055-11219" (driver="docker")
	I0125 16:51:46.861515   20778 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0125 16:51:46.861611   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0125 16:51:46.862283   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:51:46.966378   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59742 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/skaffold-20220125165055-11219/id_rsa Username:docker}
	I0125 16:51:47.061785   20778 ssh_runner.go:195] Run: cat /etc/os-release
	I0125 16:51:47.065814   20778 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0125 16:51:47.065828   20778 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0125 16:51:47.065837   20778 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0125 16:51:47.065842   20778 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0125 16:51:47.065849   20778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/addons for local assets ...
	I0125 16:51:47.066195   20778 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files for local assets ...
	I0125 16:51:47.066574   20778 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem -> 112192.pem in /etc/ssl/certs
	I0125 16:51:47.066745   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0125 16:51:47.074251   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem --> /etc/ssl/certs/112192.pem (1708 bytes)
	I0125 16:51:47.090274   20778 start.go:270] post-start completed in 228.759385ms
	I0125 16:51:47.091097   20778 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20220125165055-11219
	I0125 16:51:47.194036   20778 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/config.json ...
	I0125 16:51:47.194465   20778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0125 16:51:47.194530   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:51:47.297708   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59742 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/skaffold-20220125165055-11219/id_rsa Username:docker}
	I0125 16:51:47.393370   20778 start.go:129] duration metric: createHost completed in 49.776595869s
	I0125 16:51:47.393384   20778 start.go:80] releasing machines lock for "skaffold-20220125165055-11219", held for 49.776743653s
	I0125 16:51:47.393498   20778 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" skaffold-20220125165055-11219
	I0125 16:51:47.497549   20778 ssh_runner.go:195] Run: systemctl --version
	I0125 16:51:47.497621   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:51:47.498087   20778 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0125 16:51:47.498307   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:51:47.609630   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59742 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/skaffold-20220125165055-11219/id_rsa Username:docker}
	I0125 16:51:47.609648   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59742 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/skaffold-20220125165055-11219/id_rsa Username:docker}
	I0125 16:51:47.702521   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0125 16:51:47.891030   20778 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0125 16:51:47.900509   20778 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0125 16:51:47.900561   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0125 16:51:47.909471   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0125 16:51:47.922088   20778 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0125 16:51:47.982612   20778 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0125 16:51:48.038442   20778 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0125 16:51:48.048489   20778 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0125 16:51:48.103012   20778 ssh_runner.go:195] Run: sudo systemctl start docker
	I0125 16:51:48.112878   20778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0125 16:51:48.149791   20778 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0125 16:51:48.235757   20778 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0125 16:51:48.235978   20778 cli_runner.go:133] Run: docker exec -t skaffold-20220125165055-11219 dig +short host.docker.internal
	I0125 16:51:48.397212   20778 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0125 16:51:48.398264   20778 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0125 16:51:48.402696   20778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
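The /etc/hosts update above is deliberately idempotent: strip any line already tagged host.minikube.internal, then append a fresh record, so reprovisioning never duplicates the entry. A rough Go equivalent of that shell pipeline, assuming direct write access to the file instead of the sudo/temp-file dance in the logged command:

package main

import (
	"fmt"
	"os"
	"strings"
)

// injectHostRecord mirrors the shell pipeline from the log: drop any line
// already ending in "\t<name>", then append the new record, so repeated
// runs never duplicate the entry.
func injectHostRecord(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the stale record
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := injectHostRecord("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}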
	I0125 16:51:48.412177   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:51:48.542227   20778 out.go:176]   - kubelet.housekeeping-interval=5m
	I0125 16:51:48.542344   20778 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0125 16:51:48.542458   20778 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0125 16:51:48.573410   20778 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0125 16:51:48.573417   20778 docker.go:537] Images already preloaded, skipping extraction
	I0125 16:51:48.573509   20778 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0125 16:51:48.603855   20778 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0125 16:51:48.603876   20778 cache_images.go:84] Images are preloaded, skipping loading
	I0125 16:51:48.603970   20778 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0125 16:51:48.680113   20778 cni.go:93] Creating CNI manager for ""
	I0125 16:51:48.680122   20778 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0125 16:51:48.680137   20778 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0125 16:51:48.680149   20778 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:skaffold-20220125165055-11219 NodeName:skaffold-20220125165055-11219 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0125 16:51:48.680262   20778 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "skaffold-20220125165055-11219"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
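	
	The YAML above is the rendered form of the kubeadm options struct logged just before it; minikube builds such config text from its options via templating. A toy sketch of that rendering step using Go's text/template, with a hypothetical Options struct covering only a couple of the real fields:
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Options carries a small, illustrative subset of the kubeadm parameters
	// seen in the log (AdvertiseAddress, APIServerPort, KubernetesVersion).
	type Options struct {
		AdvertiseAddress  string
		APIServerPort     int
		KubernetesVersion string
	}
	
	// initCfg renders just the InitConfiguration fragment for illustration.
	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	`
	
	func main() {
		tmpl := template.Must(template.New("init").Parse(initCfg))
		_ = tmpl.Execute(os.Stdout, Options{
			AdvertiseAddress:  "192.168.49.2",
			APIServerPort:     8443,
			KubernetesVersion: "v1.23.2",
		})
	}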
	
	I0125 16:51:48.680347   20778 kubeadm.go:791] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=skaffold-20220125165055-11219 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:skaffold-20220125165055-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0125 16:51:48.680410   20778 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0125 16:51:48.687988   20778 binaries.go:44] Found k8s binaries, skipping transfer
	I0125 16:51:48.688050   20778 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0125 16:51:48.694932   20778 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0125 16:51:48.707254   20778 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0125 16:51:48.719749   20778 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2051 bytes)
	I0125 16:51:48.732006   20778 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0125 16:51:48.735685   20778 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0125 16:51:48.744938   20778 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219 for IP: 192.168.49.2
	I0125 16:51:48.745049   20778 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.key
	I0125 16:51:48.745103   20778 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.key
	I0125 16:51:48.745163   20778 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/client.key
	I0125 16:51:48.745176   20778 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/client.crt with IP's: []
	I0125 16:51:48.835287   20778 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/client.crt ...
	I0125 16:51:48.835299   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/client.crt: {Name:mk398ef10d709eedb02155537d9a672d1bc27e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 16:51:48.838763   20778 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/client.key ...
	I0125 16:51:48.838783   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/client.key: {Name:mk43f5037512760e49a40587da2f115562f64984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 16:51:48.839773   20778 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/apiserver.key.dd3b5fb2
	I0125 16:51:48.839816   20778 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0125 16:51:48.976399   20778 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/apiserver.crt.dd3b5fb2 ...
	I0125 16:51:48.976407   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/apiserver.crt.dd3b5fb2: {Name:mk74b1b3b74d2bef42d0f81a0cfcca6a20032d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 16:51:48.977500   20778 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/apiserver.key.dd3b5fb2 ...
	I0125 16:51:48.977527   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/apiserver.key.dd3b5fb2: {Name:mk3e0368e333fe27494af8fa88bfae667db0804d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 16:51:48.978279   20778 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/apiserver.crt
	I0125 16:51:48.978458   20778 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/apiserver.key
	I0125 16:51:48.978610   20778 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/proxy-client.key
	I0125 16:51:48.978625   20778 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/proxy-client.crt with IP's: []
	I0125 16:51:49.079001   20778 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/proxy-client.crt ...
	I0125 16:51:49.079009   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/proxy-client.crt: {Name:mkc84fc012888031d9d6d339aab993fead26b7c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 16:51:49.080422   20778 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/proxy-client.key ...
	I0125 16:51:49.080434   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/proxy-client.key: {Name:mk81b950a220aaccd2bd17b0fc951960ec5cee68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 16:51:49.081586   20778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219.pem (1338 bytes)
	W0125 16:51:49.081634   20778 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219_empty.pem, impossibly tiny 0 bytes
	I0125 16:51:49.081647   20778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem (1675 bytes)
	I0125 16:51:49.081684   20778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem (1082 bytes)
	I0125 16:51:49.081722   20778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem (1123 bytes)
	I0125 16:51:49.081755   20778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem (1675 bytes)
	I0125 16:51:49.081829   20778 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem (1708 bytes)
	I0125 16:51:49.082636   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0125 16:51:49.100509   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0125 16:51:49.117063   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0125 16:51:49.133166   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/skaffold-20220125165055-11219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0125 16:51:49.149240   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0125 16:51:49.165280   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0125 16:51:49.181354   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0125 16:51:49.196984   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0125 16:51:49.213066   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem --> /usr/share/ca-certificates/112192.pem (1708 bytes)
	I0125 16:51:49.229301   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0125 16:51:49.245894   20778 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219.pem --> /usr/share/ca-certificates/11219.pem (1338 bytes)
	I0125 16:51:49.261927   20778 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0125 16:51:49.275234   20778 ssh_runner.go:195] Run: openssl version
	I0125 16:51:49.280959   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112192.pem && ln -fs /usr/share/ca-certificates/112192.pem /etc/ssl/certs/112192.pem"
	I0125 16:51:49.289101   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112192.pem
	I0125 16:51:49.292950   20778 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 26 00:05 /usr/share/ca-certificates/112192.pem
	I0125 16:51:49.292988   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112192.pem
	I0125 16:51:49.298363   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112192.pem /etc/ssl/certs/3ec20f2e.0"
	I0125 16:51:49.305917   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0125 16:51:49.313327   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0125 16:51:49.317288   20778 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 26 00:00 /usr/share/ca-certificates/minikubeCA.pem
	I0125 16:51:49.317336   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0125 16:51:49.323121   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0125 16:51:49.330821   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11219.pem && ln -fs /usr/share/ca-certificates/11219.pem /etc/ssl/certs/11219.pem"
	I0125 16:51:49.338730   20778 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11219.pem
	I0125 16:51:49.342735   20778 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 26 00:05 /usr/share/ca-certificates/11219.pem
	I0125 16:51:49.342774   20778 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11219.pem
	I0125 16:51:49.348115   20778 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11219.pem /etc/ssl/certs/51391683.0"
	I0125 16:51:49.355731   20778 kubeadm.go:388] StartCluster: {Name:skaffold-20220125165055-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2600 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:skaffold-20220125165055-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0125 16:51:49.355833   20778 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0125 16:51:49.385553   20778 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0125 16:51:49.393630   20778 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0125 16:51:49.400702   20778 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0125 16:51:49.400745   20778 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0125 16:51:49.407819   20778 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0125 16:51:49.407834   20778 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0125 16:51:49.912175   20778 out.go:203]   - Generating certificates and keys ...
	I0125 16:51:52.258333   20778 out.go:203]   - Booting up control plane ...
	I0125 16:52:06.293812   20778 out.go:203]   - Configuring RBAC rules ...
	I0125 16:52:06.677131   20778 cni.go:93] Creating CNI manager for ""
	I0125 16:52:06.677137   20778 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0125 16:52:06.677157   20778 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0125 16:52:06.677248   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 16:52:06.677250   20778 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=f2b90e74c34b616e7f63aca230995ce4db99c965 minikube.k8s.io/name=skaffold-20220125165055-11219 minikube.k8s.io/updated_at=2022_01_25T16_52_06_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 16:52:06.829418   20778 ops.go:34] apiserver oom_adj: -16
	I0125 16:52:06.829428   20778 kubeadm.go:867] duration metric: took 152.240385ms to wait for elevateKubeSystemPrivileges.
	I0125 16:52:06.925700   20778 kubeadm.go:390] StartCluster complete in 17.570116149s
	I0125 16:52:06.925736   20778 settings.go:142] acquiring lock: {Name:mk4b38f66d2c1d7ad910ce332a6e0f9663533ce8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 16:52:06.925839   20778 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	I0125 16:52:06.926523   20778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig: {Name:mk22ac11166e634b93c7a48f1f20a682ee77d8e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 16:52:07.445232   20778 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "skaffold-20220125165055-11219" rescaled to 1
	I0125 16:52:07.445278   20778 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}
	I0125 16:52:07.445288   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0125 16:52:07.445335   20778 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0125 16:52:07.473593   20778 out.go:176] * Verifying Kubernetes components...
	I0125 16:52:07.445473   20778 config.go:176] Loaded profile config "skaffold-20220125165055-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 16:52:07.473642   20778 addons.go:65] Setting storage-provisioner=true in profile "skaffold-20220125165055-11219"
	I0125 16:52:07.473652   20778 addons.go:65] Setting default-storageclass=true in profile "skaffold-20220125165055-11219"
	I0125 16:52:07.473669   20778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0125 16:52:07.473672   20778 addons.go:153] Setting addon storage-provisioner=true in "skaffold-20220125165055-11219"
	W0125 16:52:07.473677   20778 addons.go:165] addon storage-provisioner should already be in state true
	I0125 16:52:07.473680   20778 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "skaffold-20220125165055-11219"
	I0125 16:52:07.473702   20778 host.go:66] Checking if "skaffold-20220125165055-11219" exists ...
	I0125 16:52:07.473999   20778 cli_runner.go:133] Run: docker container inspect skaffold-20220125165055-11219 --format={{.State.Status}}
	I0125 16:52:07.495131   20778 cli_runner.go:133] Run: docker container inspect skaffold-20220125165055-11219 --format={{.State.Status}}
	I0125 16:52:07.505735   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:52:07.505790   20778 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0125 16:52:07.625278   20778 addons.go:153] Setting addon default-storageclass=true in "skaffold-20220125165055-11219"
	W0125 16:52:07.625290   20778 addons.go:165] addon default-storageclass should already be in state true
	I0125 16:52:07.625304   20778 host.go:66] Checking if "skaffold-20220125165055-11219" exists ...
	I0125 16:52:07.625815   20778 cli_runner.go:133] Run: docker container inspect skaffold-20220125165055-11219 --format={{.State.Status}}
	I0125 16:52:07.671281   20778 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0125 16:52:07.671439   20778 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0125 16:52:07.671449   20778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0125 16:52:07.671551   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:52:07.678073   20778 api_server.go:51] waiting for apiserver process to appear ...
	I0125 16:52:07.678121   20778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0125 16:52:07.773343   20778 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0125 16:52:07.773371   20778 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0125 16:52:07.773521   20778 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" skaffold-20220125165055-11219
	I0125 16:52:07.805359   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59742 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/skaffold-20220125165055-11219/id_rsa Username:docker}
	I0125 16:52:07.899288   20778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59742 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/skaffold-20220125165055-11219/id_rsa Username:docker}
	I0125 16:52:07.913728   20778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0125 16:52:08.007684   20778 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0125 16:52:08.315583   20778 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0125 16:52:08.315586   20778 api_server.go:71] duration metric: took 870.284279ms to wait for apiserver process to appear ...
	I0125 16:52:08.315613   20778 api_server.go:87] waiting for apiserver healthz status ...
	I0125 16:52:08.315623   20778 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:59746/healthz ...
	I0125 16:52:08.321987   20778 api_server.go:266] https://127.0.0.1:59746/healthz returned 200:
	ok
	I0125 16:52:08.323584   20778 api_server.go:140] control plane version: v1.23.2
	I0125 16:52:08.323591   20778 api_server.go:130] duration metric: took 7.976005ms to wait for apiserver health ...
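The readiness gate here is two-staged: poll /healthz until it answers 200 with body "ok", then read the control-plane version. A bare-bones sketch of the healthz poll, assuming a placeholder URL and skipping TLS verification only because the sketch has no cluster CA (the real client authenticates with the cluster certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns
// HTTP 200 with body "ok", or the deadline passes.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:59746/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}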
	I0125 16:52:08.323597   20778 system_pods.go:43] waiting for kube-system pods to appear ...
	I0125 16:52:08.329384   20778 system_pods.go:59] 0 kube-system pods found
	I0125 16:52:08.329400   20778 retry.go:31] will retry after 263.082536ms: only 0 pod(s) have shown up
	I0125 16:52:08.397032   20778 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0125 16:52:08.397045   20778 addons.go:417] enableAddons completed in 951.734737ms
	I0125 16:52:08.600661   20778 system_pods.go:59] 1 kube-system pods found
	I0125 16:52:08.600674   20778 system_pods.go:61] "storage-provisioner" [cfabf725-7cc8-4204-b53a-045fae4a6fc2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0125 16:52:08.600681   20778 retry.go:31] will retry after 381.329545ms: only 1 pod(s) have shown up
	I0125 16:52:08.993234   20778 system_pods.go:59] 1 kube-system pods found
	I0125 16:52:08.993244   20778 system_pods.go:61] "storage-provisioner" [cfabf725-7cc8-4204-b53a-045fae4a6fc2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0125 16:52:08.993250   20778 retry.go:31] will retry after 422.765636ms: only 1 pod(s) have shown up
	I0125 16:52:09.421223   20778 system_pods.go:59] 1 kube-system pods found
	I0125 16:52:09.421232   20778 system_pods.go:61] "storage-provisioner" [cfabf725-7cc8-4204-b53a-045fae4a6fc2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0125 16:52:09.421241   20778 retry.go:31] will retry after 473.074753ms: only 1 pod(s) have shown up
	I0125 16:52:09.902831   20778 system_pods.go:59] 1 kube-system pods found
	I0125 16:52:09.902842   20778 system_pods.go:61] "storage-provisioner" [cfabf725-7cc8-4204-b53a-045fae4a6fc2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0125 16:52:09.902849   20778 retry.go:31] will retry after 587.352751ms: only 1 pod(s) have shown up
	I0125 16:52:10.506410   20778 system_pods.go:59] 5 kube-system pods found
	I0125 16:52:10.506418   20778 system_pods.go:61] "etcd-skaffold-20220125165055-11219" [7ad103e3-9304-4e63-a9a5-233c2cb35fd0] Pending
	I0125 16:52:10.506421   20778 system_pods.go:61] "kube-apiserver-skaffold-20220125165055-11219" [3cd30b81-79e5-47ba-aec4-2f7936a123a7] Pending
	I0125 16:52:10.506430   20778 system_pods.go:61] "kube-controller-manager-skaffold-20220125165055-11219" [56826c63-07c8-448d-9a81-e3844e78b219] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0125 16:52:10.506433   20778 system_pods.go:61] "kube-scheduler-skaffold-20220125165055-11219" [e04d1fee-9fec-42be-b43a-e1862b98ec90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0125 16:52:10.506440   20778 system_pods.go:61] "storage-provisioner" [cfabf725-7cc8-4204-b53a-045fae4a6fc2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0125 16:52:10.506445   20778 system_pods.go:74] duration metric: took 2.182863297s to wait for pod list to return data ...
	I0125 16:52:10.506450   20778 kubeadm.go:542] duration metric: took 3.061168195s to wait for : map[apiserver:true system_pods:true] ...
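Note the pod-wait pattern in the retries above: each miss lengthens the next delay (263ms, 381ms, 422ms, 473ms, 587ms) rather than polling at a fixed rate, which keeps pressure off a just-started apiserver. A simplified version of that growing, jittered backoff; the starting delay, multiplier, and attempt count are invented for illustration:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// pollWithBackoff retries check() until it succeeds or attempts run out,
// growing the delay a little each round and adding jitter, much like the
// "will retry after ..." intervals in the log above.
func pollWithBackoff(check func() (bool, string), attempts int) error {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		ok, msg := check()
		if ok {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %s\n", wait, msg)
		time.Sleep(wait)
		delay = delay * 5 / 4 // grow ~25% per attempt (illustrative)
	}
	return fmt.Errorf("condition never met after %d attempts", attempts)
}

func main() {
	pods := 0
	_ = pollWithBackoff(func() (bool, string) {
		pods++ // stand-in for listing kube-system pods
		return pods >= 5, fmt.Sprintf("only %d pod(s) have shown up", pods)
	}, 10)
}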
	I0125 16:52:10.506456   20778 node_conditions.go:102] verifying NodePressure condition ...
	I0125 16:52:10.509286   20778 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0125 16:52:10.509295   20778 node_conditions.go:123] node cpu capacity is 6
	I0125 16:52:10.509303   20778 node_conditions.go:105] duration metric: took 2.845554ms to run NodePressure ...
	I0125 16:52:10.509310   20778 start.go:213] waiting for startup goroutines ...
	I0125 16:52:10.547426   20778 start.go:493] kubectl: 1.19.7, cluster: 1.23.2 (minor skew: 4)
	I0125 16:52:10.573426   20778 out.go:176] 
	W0125 16:52:10.573654   20778 out.go:241] ! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilities with Kubernetes 1.23.2.
	I0125 16:52:10.599939   20778 out.go:176]   - Want kubectl v1.23.2? Try 'minikube kubectl -- get pods -A'
	I0125 16:52:10.626756   20778 out.go:176] * Done! kubectl is now configured to use "skaffold-20220125165055-11219" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-01-26 00:51:20 UTC, end at Wed 2022-01-26 00:52:15 UTC. --
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[227]: time="2022-01-26T00:51:30.700708206Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[227]: time="2022-01-26T00:51:30.701735866Z" level=info msg="Daemon shutdown complete"
	Jan 26 00:51:30 skaffold-20220125165055-11219 systemd[1]: docker.service: Succeeded.
	Jan 26 00:51:30 skaffold-20220125165055-11219 systemd[1]: Stopped Docker Application Container Engine.
	Jan 26 00:51:30 skaffold-20220125165055-11219 systemd[1]: Starting Docker Application Container Engine...
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:30.745076902Z" level=info msg="Starting up"
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:30.747017898Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:30.747050918Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:30.747069238Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:30.747076665Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:30.748124596Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:30.748226209Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:30.748284282Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:30.748385551Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:30.752337017Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:30.756903700Z" level=warning msg="Your kernel does not support cgroup blkio weight"
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:30.756933950Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
	Jan 26 00:51:30 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:30.757116874Z" level=info msg="Loading containers: start."
	Jan 26 00:51:42 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:42.284627125Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 26 00:51:46 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:46.833575538Z" level=info msg="Loading containers: done."
	Jan 26 00:51:46 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:46.846589654Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
	Jan 26 00:51:46 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:46.846649193Z" level=info msg="Daemon has completed initialization"
	Jan 26 00:51:46 skaffold-20220125165055-11219 systemd[1]: Started Docker Application Container Engine.
	Jan 26 00:51:46 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:46.870560464Z" level=info msg="API listen on [::]:2376"
	Jan 26 00:51:46 skaffold-20220125165055-11219 dockerd[466]: time="2022-01-26T00:51:46.873545381Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	2651a1c011d81       6114d758d6d16       16 seconds ago      Running             kube-scheduler            0                   46a1ef41ee426
	ccb2ddc621724       25f8c7f3da61c       16 seconds ago      Running             etcd                      0                   5830d77d9b0b9
	95ccf34b05036       8a0228dd6a683       16 seconds ago      Running             kube-apiserver            0                   03712f57ff881
	90bc63da2a264       4783639ba7e03       16 seconds ago      Running             kube-controller-manager   0                   1d399ba7161f0
	
	* 
	* ==> describe nodes <==
	* Name:               skaffold-20220125165055-11219
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=skaffold-20220125165055-11219
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f2b90e74c34b616e7f63aca230995ce4db99c965
	                    minikube.k8s.io/name=skaffold-20220125165055-11219
	                    minikube.k8s.io/updated_at=2022_01_25T16_52_06_0700
	                    minikube.k8s.io/version=v1.25.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 26 Jan 2022 00:52:05 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  skaffold-20220125165055-11219
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 26 Jan 2022 00:52:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 26 Jan 2022 00:52:06 +0000   Wed, 26 Jan 2022 00:52:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 26 Jan 2022 00:52:06 +0000   Wed, 26 Jan 2022 00:52:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 26 Jan 2022 00:52:06 +0000   Wed, 26 Jan 2022 00:52:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 26 Jan 2022 00:52:06 +0000   Wed, 26 Jan 2022 00:52:05 +0000   KubeletNotReady              [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    skaffold-20220125165055-11219
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61255492Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6088600Ki
	  pods:               110
	System Info:
	  Machine ID:                 8de776e053e140d6a14c2d2def3d6bb8
	  System UUID:                c33e763d-3b63-4a36-b19f-006788ba319a
	  Boot ID:                    64eaa28b-2bea-4721-8bf9-d8b79f6942f4
	  Kernel Version:             5.10.25-linuxkit
	  OS Image:                   Ubuntu 20.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.12
	  Kubelet Version:            v1.23.2
	  Kube-Proxy Version:         v1.23.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-skaffold-20220125165055-11219                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         5s
	  kube-system                 kube-apiserver-skaffold-20220125165055-11219             250m (4%)     0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-controller-manager-skaffold-20220125165055-11219    200m (3%)     0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-scheduler-skaffold-20220125165055-11219             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 9s    kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s    kubelet  Node skaffold-20220125165055-11219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s    kubelet  Node skaffold-20220125165055-11219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s    kubelet  Node skaffold-20220125165055-11219 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s    kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.035973] bpfilter: write fail -32
	[  +0.028219] bpfilter: read fail 0
	[  +0.031004] bpfilter: write fail -32
	[  +0.025328] bpfilter: write fail -32
	[  +0.032514] bpfilter: read fail 0
	[  +0.035861] bpfilter: read fail 0
	[  +0.035092] bpfilter: write fail -32
	[  +0.034857] bpfilter: write fail -32
	[  +0.044450] bpfilter: read fail 0
	[  +0.030230] bpfilter: write fail -32
	[  +0.032368] bpfilter: write fail -32
	[  +0.031630] bpfilter: read fail 0
	[  +0.027899] bpfilter: read fail 0
	[  +0.035503] bpfilter: read fail 0
	[  +0.037822] bpfilter: read fail 0
	[  +0.031741] bpfilter: read fail 0
	[  +0.026000] bpfilter: read fail 0
	[  +0.023680] bpfilter: read fail 0
	[  +0.035171] bpfilter: read fail 0
	[  +0.029222] bpfilter: read fail 0
	[  +0.033321] bpfilter: read fail 0
	[  +0.029878] bpfilter: write fail -32
	[  +0.030639] bpfilter: read fail 0
	[  +0.025178] bpfilter: write fail -32
	[  +0.034478] bpfilter: write fail -32
	
	* 
	* ==> etcd [ccb2ddc62172] <==
	* {"level":"info","ts":"2022-01-26T00:52:00.031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-01-26T00:52:00.031Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-01-26T00:52:00.034Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-01-26T00:52:00.034Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-01-26T00:52:00.034Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-01-26T00:52:00.034Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-01-26T00:52:00.034Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-01-26T00:52:00.683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-01-26T00:52:00.683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-01-26T00:52:00.683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-01-26T00:52:00.683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-01-26T00:52:00.683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-01-26T00:52:00.683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-01-26T00:52:00.683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-01-26T00:52:00.683Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-01-26T00:52:00.684Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-01-26T00:52:00.684Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-01-26T00:52:00.684Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-01-26T00:52:00.684Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:skaffold-20220125165055-11219 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-01-26T00:52:00.685Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-01-26T00:52:00.685Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-01-26T00:52:00.685Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-01-26T00:52:00.686Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-01-26T00:52:00.686Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-01-26T00:52:00.686Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  00:52:15 up 53 min,  0 users,  load average: 1.87, 1.66, 1.70
	Linux skaffold-20220125165055-11219 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [95ccf34b0503] <==
	* I0126 00:52:02.345903       1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0126 00:52:02.376256       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0126 00:52:02.378213       1 controller.go:611] quota admission added evaluator for: namespaces
	I0126 00:52:02.431324       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0126 00:52:02.433991       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0126 00:52:02.436208       1 cache.go:39] Caches are synced for autoregister controller
	I0126 00:52:02.436357       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0126 00:52:02.436561       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0126 00:52:02.445013       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0126 00:52:03.332668       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0126 00:52:03.332739       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0126 00:52:03.338093       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0126 00:52:03.342253       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0126 00:52:03.342281       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0126 00:52:03.628082       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0126 00:52:03.651312       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0126 00:52:03.687523       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0126 00:52:03.691040       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0126 00:52:03.691553       1 controller.go:611] quota admission added evaluator for: endpoints
	I0126 00:52:03.694135       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0126 00:52:04.470002       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0126 00:52:06.472399       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0126 00:52:06.478785       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0126 00:52:06.486705       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0126 00:52:06.649060       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	
	* 
	* ==> kube-controller-manager [90bc63da2a26] <==
	* I0126 00:52:05.079852       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for events.events.k8s.io
	I0126 00:52:05.079929       1 resource_quota_monitor.go:233] QuotaMonitor created object count evaluator for csistoragecapacities.storage.k8s.io
	I0126 00:52:05.079987       1 controllermanager.go:605] Started "resourcequota"
	I0126 00:52:05.080048       1 resource_quota_controller.go:273] Starting resource quota controller
	I0126 00:52:05.080112       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0126 00:52:05.080284       1 resource_quota_monitor.go:308] QuotaMonitor running
	I0126 00:52:05.219745       1 controllermanager.go:605] Started "serviceaccount"
	I0126 00:52:05.219775       1 serviceaccounts_controller.go:117] Starting service account controller
	I0126 00:52:05.219783       1 shared_informer.go:240] Waiting for caches to sync for service account
	I0126 00:52:05.468181       1 garbagecollector.go:146] Starting garbage collector controller
	I0126 00:52:05.468224       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0126 00:52:05.468304       1 graph_builder.go:289] GraphBuilder running
	I0126 00:52:05.468341       1 controllermanager.go:605] Started "garbagecollector"
	I0126 00:52:05.718696       1 controllermanager.go:605] Started "daemonset"
	I0126 00:52:05.718732       1 daemon_controller.go:284] Starting daemon sets controller
	I0126 00:52:05.718737       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
	E0126 00:52:05.868468       1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
	W0126 00:52:05.868505       1 controllermanager.go:583] Skipping "service"
	I0126 00:52:06.019465       1 controllermanager.go:605] Started "attachdetach"
	I0126 00:52:06.019540       1 attach_detach_controller.go:328] Starting attach detach controller
	I0126 00:52:06.019547       1 shared_informer.go:240] Waiting for caches to sync for attach detach
	I0126 00:52:06.171024       1 controllermanager.go:605] Started "statefulset"
	I0126 00:52:06.171077       1 stateful_set.go:147] Starting stateful set controller
	I0126 00:52:06.171083       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0126 00:52:06.218131       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [2651a1c011d8] <==
	* W0126 00:52:02.379591       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0126 00:52:02.379621       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0126 00:52:02.379675       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0126 00:52:02.379704       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0126 00:52:02.379759       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0126 00:52:02.379788       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0126 00:52:02.379945       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0126 00:52:02.379975       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0126 00:52:02.380037       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0126 00:52:02.380067       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0126 00:52:02.380100       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0126 00:52:02.380130       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0126 00:52:02.380171       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0126 00:52:02.380201       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0126 00:52:02.380207       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0126 00:52:02.380210       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0126 00:52:03.302092       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0126 00:52:03.302212       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0126 00:52:03.434824       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0126 00:52:03.434840       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0126 00:52:03.479822       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0126 00:52:03.479857       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0126 00:52:03.500509       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0126 00:52:03.500578       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0126 00:52:05.175598       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-01-26 00:51:20 UTC, end at Wed 2022-01-26 00:52:15 UTC. --
	Jan 26 00:52:09 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:09.920277    1960 kubelet_network_linux.go:57] "Initialized protocol iptables rules." protocol=IPv4
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.235674    1960 kubelet_network_linux.go:57] "Initialized protocol iptables rules." protocol=IPv6
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.235713    1960 status_manager.go:159] "Starting to sync pod status with apiserver"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.235730    1960 kubelet.go:1977] "Starting kubelet main sync loop"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: E0126 00:52:10.235754    1960 kubelet.go:2001] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.337112    1960 topology_manager.go:200] "Topology Admit Handler"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.337323    1960 topology_manager.go:200] "Topology Admit Handler"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.337384    1960 topology_manager.go:200] "Topology Admit Handler"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.337434    1960 topology_manager.go:200] "Topology Admit Handler"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.370400    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5f57d96b4b1084c6e4feb4bc6427c0b-etc-ca-certificates\") pod \"kube-apiserver-skaffold-20220125165055-11219\" (UID: \"a5f57d96b4b1084c6e4feb4bc6427c0b\") " pod="kube-system/kube-apiserver-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.370496    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec67c7ded89da96100eb8435d4b335cd-ca-certs\") pod \"kube-controller-manager-skaffold-20220125165055-11219\" (UID: \"ec67c7ded89da96100eb8435d4b335cd\") " pod="kube-system/kube-controller-manager-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.370576    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec67c7ded89da96100eb8435d4b335cd-usr-local-share-ca-certificates\") pod \"kube-controller-manager-skaffold-20220125165055-11219\" (UID: \"ec67c7ded89da96100eb8435d4b335cd\") " pod="kube-system/kube-controller-manager-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.370637    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ec67c7ded89da96100eb8435d4b335cd-kubeconfig\") pod \"kube-controller-manager-skaffold-20220125165055-11219\" (UID: \"ec67c7ded89da96100eb8435d4b335cd\") " pod="kube-system/kube-controller-manager-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.370671    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/d82dd6e0a614b7834028d377926c9de0-etcd-certs\") pod \"etcd-skaffold-20220125165055-11219\" (UID: \"d82dd6e0a614b7834028d377926c9de0\") " pod="kube-system/etcd-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.370733    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/d82dd6e0a614b7834028d377926c9de0-etcd-data\") pod \"etcd-skaffold-20220125165055-11219\" (UID: \"d82dd6e0a614b7834028d377926c9de0\") " pod="kube-system/etcd-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.370799    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5f57d96b4b1084c6e4feb4bc6427c0b-ca-certs\") pod \"kube-apiserver-skaffold-20220125165055-11219\" (UID: \"a5f57d96b4b1084c6e4feb4bc6427c0b\") " pod="kube-system/kube-apiserver-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.370840    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec67c7ded89da96100eb8435d4b335cd-etc-ca-certificates\") pod \"kube-controller-manager-skaffold-20220125165055-11219\" (UID: \"ec67c7ded89da96100eb8435d4b335cd\") " pod="kube-system/kube-controller-manager-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.370869    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ec67c7ded89da96100eb8435d4b335cd-flexvolume-dir\") pod \"kube-controller-manager-skaffold-20220125165055-11219\" (UID: \"ec67c7ded89da96100eb8435d4b335cd\") " pod="kube-system/kube-controller-manager-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.370949    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec67c7ded89da96100eb8435d4b335cd-k8s-certs\") pod \"kube-controller-manager-skaffold-20220125165055-11219\" (UID: \"ec67c7ded89da96100eb8435d4b335cd\") " pod="kube-system/kube-controller-manager-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.371022    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a12f1ca09f26286cfaff34e27274e6f4-kubeconfig\") pod \"kube-scheduler-skaffold-20220125165055-11219\" (UID: \"a12f1ca09f26286cfaff34e27274e6f4\") " pod="kube-system/kube-scheduler-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.371063    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5f57d96b4b1084c6e4feb4bc6427c0b-k8s-certs\") pod \"kube-apiserver-skaffold-20220125165055-11219\" (UID: \"a5f57d96b4b1084c6e4feb4bc6427c0b\") " pod="kube-system/kube-apiserver-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.371099    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5f57d96b4b1084c6e4feb4bc6427c0b-usr-local-share-ca-certificates\") pod \"kube-apiserver-skaffold-20220125165055-11219\" (UID: \"a5f57d96b4b1084c6e4feb4bc6427c0b\") " pod="kube-system/kube-apiserver-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.371130    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5f57d96b4b1084c6e4feb4bc6427c0b-usr-share-ca-certificates\") pod \"kube-apiserver-skaffold-20220125165055-11219\" (UID: \"a5f57d96b4b1084c6e4feb4bc6427c0b\") " pod="kube-system/kube-apiserver-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.371160    1960 reconciler.go:216] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec67c7ded89da96100eb8435d4b335cd-usr-share-ca-certificates\") pod \"kube-controller-manager-skaffold-20220125165055-11219\" (UID: \"ec67c7ded89da96100eb8435d4b335cd\") " pod="kube-system/kube-controller-manager-skaffold-20220125165055-11219"
	Jan 26 00:52:10 skaffold-20220125165055-11219 kubelet[1960]: I0126 00:52:10.371176    1960 reconciler.go:157] "Reconciler: start to sync state"
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p skaffold-20220125165055-11219 -n skaffold-20220125165055-11219
helpers_test.go:262: (dbg) Run:  kubectl --context skaffold-20220125165055-11219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:262: (dbg) Done: kubectl --context skaffold-20220125165055-11219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: (2.110369003s)
helpers_test.go:271: non-running pods: coredns-64897985d-hsnfx kube-proxy-fj5q7 storage-provisioner
helpers_test.go:273: ======> post-mortem[TestSkaffold]: describe non-running pods <======
helpers_test.go:276: (dbg) Run:  kubectl --context skaffold-20220125165055-11219 describe pod coredns-64897985d-hsnfx kube-proxy-fj5q7 storage-provisioner
helpers_test.go:276: (dbg) Non-zero exit: kubectl --context skaffold-20220125165055-11219 describe pod coredns-64897985d-hsnfx kube-proxy-fj5q7 storage-provisioner: exit status 1 (59.602647ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-hsnfx" not found
	Error from server (NotFound): pods "kube-proxy-fj5q7" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:278: kubectl --context skaffold-20220125165055-11219 describe pod coredns-64897985d-hsnfx kube-proxy-fj5q7 storage-provisioner: exit status 1
helpers_test.go:176: Cleaning up "skaffold-20220125165055-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-20220125165055-11219
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-20220125165055-11219: (12.645204251s)
--- FAIL: TestSkaffold (96.08s)

TestRunningBinaryUpgrade (189.87s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.4116126856.exe start -p running-upgrade-20220125165756-11219 --memory=2200 --vm-driver=docker 
E0125 16:59:04.943858   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
version_upgrade_test.go:127: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.4116126856.exe start -p running-upgrade-20220125165756-11219 --memory=2200 --vm-driver=docker : (1m27.389644932s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-20220125165756-11219 --memory=2200 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p running-upgrade-20220125165756-11219 --memory=2200 --alsologtostderr -v=1 --driver=docker : exit status 81 (1m30.428292121s)

-- stdout --
	* [running-upgrade-20220125165756-11219] minikube v1.25.1 on Darwin 11.1
	  - MINIKUBE_LOCATION=13326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	* Kubernetes 1.23.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-20220125165756-11219 in cluster running-upgrade-20220125165756-11219
	* Pulling base image ...
	* Updating the running docker "running-upgrade-20220125165756-11219" container ...
	* Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
	  - kubeadm.pod-network-cidr=10.244.0.0/16
	X Problems detected in kubelet:
	  Jan 26 00:59:51 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:51.753158    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	  Jan 26 00:59:52 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:52.857684    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	  Jan 26 00:59:52 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:52.866466    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	
	

-- /stdout --
** stderr ** 
	I0125 16:59:23.887574   23465 out.go:297] Setting OutFile to fd 1 ...
	I0125 16:59:23.887709   23465 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:59:23.887714   23465 out.go:310] Setting ErrFile to fd 2...
	I0125 16:59:23.887718   23465 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:59:23.887781   23465 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/bin
	I0125 16:59:23.888048   23465 out.go:304] Setting JSON to false
	I0125 16:59:23.915690   23465 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":8938,"bootTime":1643149825,"procs":322,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0125 16:59:23.915786   23465 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0125 16:59:23.962545   23465 out.go:176] * [running-upgrade-20220125165756-11219] minikube v1.25.1 on Darwin 11.1
	I0125 16:59:23.962785   23465 notify.go:174] Checking for updates...
	I0125 16:59:24.009341   23465 out.go:176]   - MINIKUBE_LOCATION=13326
	I0125 16:59:23.963226   23465 preload.go:306] deleting older generation preload /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
	I0125 16:59:24.035520   23465 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	I0125 16:59:24.040902   23465 preload.go:306] deleting older generation preload /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4.checksum
	I0125 16:59:24.066527   23465 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0125 16:59:24.092567   23465 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0125 16:59:24.118636   23465 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	I0125 16:59:24.119471   23465 config.go:176] Loaded profile config "running-upgrade-20220125165756-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0125 16:59:24.119490   23465 start_flags.go:618] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I0125 16:59:24.145440   23465 out.go:176] * Kubernetes 1.23.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.2
	I0125 16:59:24.145481   23465 driver.go:344] Setting default libvirt URI to qemu:///system
	I0125 16:59:24.238734   23465 docker.go:132] docker version: linux-20.10.5
	I0125 16:59:24.238897   23465 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 16:59:24.424037   23465 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:54 SystemTime:2022-01-26 00:59:24.349515697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 16:59:24.513719   23465 out.go:176] * Using the docker driver based on existing profile
	I0125 16:59:24.513794   23465 start.go:280] selected driver: docker
	I0125 16:59:24.513804   23465 start.go:795] validating driver "docker" against &{Name:running-upgrade-20220125165756-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20220125165756-11219 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
	I0125 16:59:24.513943   23465 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0125 16:59:24.517682   23465 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 16:59:24.670863   23465 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:54 SystemTime:2022-01-26 00:59:24.627066714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 16:59:24.671041   23465 cni.go:93] Creating CNI manager for ""
	I0125 16:59:24.671055   23465 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0125 16:59:24.671065   23465 start_flags.go:302] config:
	{Name:running-upgrade-20220125165756-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20220125165756-11219 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
	I0125 16:59:24.718558   23465 out.go:176] * Starting control plane node running-upgrade-20220125165756-11219 in cluster running-upgrade-20220125165756-11219
	I0125 16:59:24.718617   23465 cache.go:120] Beginning downloading kic base image for docker with docker
	I0125 16:59:24.760344   23465 out.go:176] * Pulling base image ...
	I0125 16:59:24.760399   23465 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime docker
	I0125 16:59:24.760469   23465 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	W0125 16:59:24.835870   23465 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v17/v1.18.0/preloaded-images-k8s-v17-v1.18.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0125 16:59:24.835998   23465 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/config.json ...
	I0125 16:59:24.836083   23465 cache.go:107] acquiring lock: {Name:mk453979a91ca5afe4b0109f4d1b0a921a84a2a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 16:59:24.836104   23465 cache.go:107] acquiring lock: {Name:mkbc8a4d73ba92de8642f76fa9de2ecb7ca000ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 16:59:24.836133   23465 cache.go:107] acquiring lock: {Name:mk7c1d1617387f2d4a42c1f78c260cdc6ab917fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 16:59:24.836179   23465 cache.go:107] acquiring lock: {Name:mkcc4622d1e22ae7a7ecff46d6749c583e1ab6b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 16:59:24.836229   23465 cache.go:107] acquiring lock: {Name:mk587f2a7f250f98665b7ccb25ee514dbcb058a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 16:59:24.836247   23465 cache.go:107] acquiring lock: {Name:mk610650fd2813870c0e7d2cbdcc5d2ab3b85d1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 16:59:24.836255   23465 cache.go:107] acquiring lock: {Name:mk8611d64e56ca38b14ea309e197bd873532b014 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 16:59:24.836266   23465 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0125 16:59:24.836275   23465 cache.go:107] acquiring lock: {Name:mk741e936959cd5dced7702a937cadf9480937fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 16:59:24.836319   23465 cache.go:107] acquiring lock: {Name:mk1d1642c430fef7f8f51b584326999502b47176 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 16:59:24.836381   23465 cache.go:107] acquiring lock: {Name:mk53262cb1be69a125b9a3375064dcdc142d45f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 16:59:24.836396   23465 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
	I0125 16:59:24.836380   23465 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.0
	I0125 16:59:24.836411   23465 cache.go:96] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 210.31µs
	I0125 16:59:24.836434   23465 image.go:134] retrieving image: k8s.gcr.io/pause:3.2
	I0125 16:59:24.836437   23465 cache.go:80] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
	I0125 16:59:24.836494   23465 cache.go:115] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0125 16:59:24.836510   23465 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 242.316µs
	I0125 16:59:24.836535   23465 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0125 16:59:24.836590   23465 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I0125 16:59:24.836689   23465 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.0
	I0125 16:59:24.836807   23465 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.7
	I0125 16:59:24.836809   23465 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.18.0
	I0125 16:59:24.836827   23465 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.0
	I0125 16:59:24.842085   23465 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
	I0125 16:59:24.842546   23465 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.0: Error response from daemon: reference does not exist
	I0125 16:59:24.843248   23465 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
	I0125 16:59:24.843660   23465 image.go:180] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
	I0125 16:59:24.843928   23465 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.0: Error response from daemon: reference does not exist
	I0125 16:59:24.844872   23465 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.0: Error response from daemon: reference does not exist
	I0125 16:59:24.844969   23465 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.0: Error response from daemon: reference does not exist
	I0125 16:59:24.845079   23465 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist
	I0125 16:59:24.883955   23465 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0125 16:59:24.883987   23465 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0125 16:59:24.884010   23465 cache.go:208] Successfully downloaded all kic artifacts
	I0125 16:59:24.884075   23465 start.go:313] acquiring machines lock for running-upgrade-20220125165756-11219: {Name:mk11eb6c2f452b03cf7fa02e486f142effc8c196 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 16:59:24.884235   23465 start.go:317] acquired machines lock for "running-upgrade-20220125165756-11219" in 146.139µs
	I0125 16:59:24.884265   23465 start.go:93] Skipping create...Using existing machine configuration
	I0125 16:59:24.884277   23465 fix.go:55] fixHost starting: m01
	I0125 16:59:24.884527   23465 cli_runner.go:133] Run: docker container inspect running-upgrade-20220125165756-11219 --format={{.State.Status}}
	I0125 16:59:24.998115   23465 fix.go:108] recreateIfNeeded on running-upgrade-20220125165756-11219: state=Running err=<nil>
	W0125 16:59:24.998150   23465 fix.go:134] unexpected machine state, will restart: <nil>
	I0125 16:59:25.046311   23465 out.go:176] * Updating the running docker "running-upgrade-20220125165756-11219" container ...
	I0125 16:59:25.046340   23465 machine.go:88] provisioning docker machine ...
	I0125 16:59:25.046364   23465 ubuntu.go:169] provisioning hostname "running-upgrade-20220125165756-11219"
	I0125 16:59:25.046441   23465 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220125165756-11219
	I0125 16:59:25.159352   23465 main.go:130] libmachine: Using SSH client type: native
	I0125 16:59:25.159525   23465 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 63809 <nil> <nil>}
	I0125 16:59:25.159534   23465 main.go:130] libmachine: About to run SSH command:
	sudo hostname running-upgrade-20220125165756-11219 && echo "running-upgrade-20220125165756-11219" | sudo tee /etc/hostname
	I0125 16:59:25.278429   23465 main.go:130] libmachine: SSH cmd err, output: <nil>: running-upgrade-20220125165756-11219
	
	I0125 16:59:25.278560   23465 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220125165756-11219
	I0125 16:59:25.385925   23465 main.go:130] libmachine: Using SSH client type: native
	I0125 16:59:25.386066   23465 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 63809 <nil> <nil>}
	I0125 16:59:25.386080   23465 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-20220125165756-11219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-20220125165756-11219/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-20220125165756-11219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0125 16:59:25.496976   23465 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0125 16:59:25.497001   23465 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube}
	I0125 16:59:25.497017   23465 ubuntu.go:177] setting up certificates
	I0125 16:59:25.497028   23465 provision.go:83] configureAuth start
	I0125 16:59:25.497119   23465 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20220125165756-11219
	W0125 16:59:25.520883   23465 image.go:190] authn lookup for docker.io/kubernetesui/dashboard:v2.3.1 (trying anon): GET https://index.docker.io/v2/kubernetesui/dashboard/manifests/v2.3.1: TOOMANYREQUESTS: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	W0125 16:59:25.523135   23465 image.go:190] authn lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7 (trying anon): GET https://index.docker.io/v2/kubernetesui/metrics-scraper/manifests/v1.0.7: TOOMANYREQUESTS: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	I0125 16:59:25.545241   23465 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/pause_3.2
	I0125 16:59:25.611181   23465 provision.go:138] copyHostCerts
	I0125 16:59:25.611260   23465 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem, removing ...
	I0125 16:59:25.611270   23465 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem
	I0125 16:59:25.611366   23465 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem (1123 bytes)
	I0125 16:59:25.611593   23465 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem, removing ...
	I0125 16:59:25.611600   23465 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem
	I0125 16:59:25.611667   23465 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem (1675 bytes)
	I0125 16:59:25.611827   23465 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem, removing ...
	I0125 16:59:25.611834   23465 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem
	I0125 16:59:25.611895   23465 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem (1082 bytes)
	I0125 16:59:25.612047   23465 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-20220125165756-11219 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-20220125165756-11219]
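
Editor's note: the SAN list above mixes IPs (172.17.0.2, 127.0.0.1 — logged twice, which is harmless) and DNS names (localhost, minikube, the profile name). A minimal sketch of how such a mixed list splits into the two x509 SAN fields; the package and function names are illustrative, not minikube's provision code:

    // Sketch only: split mixed SAN entries into the IP and DNS fields
    // of a Go x509 certificate template.
    package sketch

    import (
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"net"
    )

    func certTemplate(sans []string, org string) *x509.Certificate {
    	tmpl := &x509.Certificate{Subject: pkix.Name{Organization: []string{org}}}
    	for _, san := range sans {
    		if ip := net.ParseIP(san); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip) // e.g. 172.17.0.2
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, san) // e.g. "minikube"
    		}
    	}
    	return tmpl
    }
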
	I0125 16:59:25.641266   23465 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0
	I0125 16:59:25.648571   23465 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0
	I0125 16:59:25.698952   23465 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0125 16:59:25.698971   23465 cache.go:96] cache image "k8s.gcr.io/pause:3.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 862.850942ms
	I0125 16:59:25.698981   23465 cache.go:80] save to tar file k8s.gcr.io/pause:3.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
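
Editor's note: the interleaved cache.go lines show a path-keyed image cache: "opening:" marks a miss that starts a download, "exists" marks a hit, and "save to tar file ... succeeded" closes the entry. A minimal sketch of that exists-or-fetch shape, assuming the tarball path is the only cache key (ensureCached is a hypothetical name):

    // Sketch only: the exists-or-fetch pattern suggested by cache.go.
    package sketch

    import "os"

    func ensureCached(path string, fetch func(string) error) error {
    	if _, err := os.Stat(path); err == nil {
    		return nil // hit: "cache.go:156 ... exists"
    	}
    	return fetch(path) // miss: "cache.go:161 opening: ..."
    }
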
	I0125 16:59:25.731029   23465 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0
	I0125 16:59:25.739585   23465 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7
	I0125 16:59:25.822155   23465 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0
	I0125 16:59:25.845904   23465 provision.go:172] copyRemoteCerts
	I0125 16:59:25.845966   23465 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0125 16:59:25.846024   23465 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220125165756-11219
	I0125 16:59:25.917136   23465 image.go:194] remote lookup for docker.io/kubernetesui/dashboard:v2.3.1: GET https://index.docker.io/v2/kubernetesui/dashboard/manifests/v2.3.1: TOOMANYREQUESTS: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	I0125 16:59:25.917181   23465 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 1.080838186s
	W0125 16:59:25.917281   23465 out.go:241] ! The image 'docker.io/kubernetesui/dashboard:v2.3.1' was not found; unable to add it to cache.
	! The image 'docker.io/kubernetesui/dashboard:v2.3.1' was not found; unable to add it to cache.
	I0125 16:59:25.918328   23465 image.go:194] remote lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: GET https://index.docker.io/v2/kubernetesui/metrics-scraper/manifests/v1.0.7: TOOMANYREQUESTS: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	I0125 16:59:25.918374   23465 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 1.082297785s
	W0125 16:59:25.918471   23465 out.go:241] ! The image 'docker.io/kubernetesui/metrics-scraper:v1.0.7' was not found; unable to add it to cache.
	! The image 'docker.io/kubernetesui/metrics-scraper:v1.0.7' was not found; unable to add it to cache.
	I0125 16:59:26.004015   23465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63809 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/running-upgrade-20220125165756-11219/id_rsa Username:docker}
	I0125 16:59:26.089872   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0125 16:59:26.107202   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0125 16:59:26.125070   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0125 16:59:26.143361   23465 provision.go:86] duration metric: configureAuth took 646.317777ms
	I0125 16:59:26.143374   23465 ubuntu.go:193] setting minikube options for container-runtime
	I0125 16:59:26.143503   23465 config.go:176] Loaded profile config "running-upgrade-20220125165756-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0125 16:59:26.143565   23465 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220125165756-11219
	I0125 16:59:26.257181   23465 main.go:130] libmachine: Using SSH client type: native
	I0125 16:59:26.257351   23465 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 63809 <nil> <nil>}
	I0125 16:59:26.257359   23465 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0125 16:59:26.372925   23465 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0125 16:59:26.372939   23465 ubuntu.go:71] root file system type: overlay
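
Editor's note: the provisioner learns the root filesystem by running `df --output=fstype / | tail -n 1` over SSH and trimming the output. A local equivalent, as a sketch (rootFSType is an assumed name; the real check runs inside the kic container via minikube's SSH client):

    // Sketch only: probe the root filesystem type the way the logged
    // df | tail command does, via os/exec.
    package sketch

    import (
    	"os/exec"
    	"strings"
    )

    func rootFSType() (string, error) {
    	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil // "overlay" in the log above
    }
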
	I0125 16:59:26.373141   23465 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0125 16:59:26.373253   23465 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220125165756-11219
	I0125 16:59:26.480870   23465 main.go:130] libmachine: Using SSH client type: native
	I0125 16:59:26.481020   23465 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 63809 <nil> <nil>}
	I0125 16:59:26.481080   23465 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0125 16:59:26.604513   23465 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0125 16:59:26.604626   23465 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220125165756-11219
	I0125 16:59:26.713123   23465 main.go:130] libmachine: Using SSH client type: native
	I0125 16:59:26.713262   23465 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 63809 <nil> <nil>}
	I0125 16:59:26.713275   23465 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
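
Editor's note: the command above is an idempotent update — `diff -u` succeeds when the rendered unit matches the installed one, so the mv/daemon-reload/restart branch only runs on a change. A sketch of the same compare-then-swap done with plain file I/O (updateUnit is an assumed name, not minikube's provision API):

    // Sketch only: rewrite the unit file only when the rendered content
    // differs; the boolean tells the caller whether a daemon-reload and
    // service restart are needed.
    package sketch

    import (
    	"bytes"
    	"os"
    )

    func updateUnit(path string, rendered []byte) (changed bool, err error) {
    	old, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return false, err
    	}
    	if bytes.Equal(old, rendered) {
    		return false, nil // diff clean: leave docker running
    	}
    	return true, os.WriteFile(path, rendered, 0o644)
    }
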
	I0125 16:59:26.890719   23465 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
	I0125 16:59:26.890745   23465 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 2.05453068s
	I0125 16:59:26.890754   23465 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
	I0125 16:59:26.904688   23465 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
	I0125 16:59:26.904708   23465 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 2.06862162s
	I0125 16:59:26.904717   23465 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
	I0125 16:59:26.970724   23465 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
	I0125 16:59:26.970758   23465 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 2.134622872s
	I0125 16:59:26.970769   23465 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.7 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
	I0125 16:59:27.175330   23465 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
	I0125 16:59:27.175350   23465 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 2.339181462s
	I0125 16:59:27.175359   23465 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
	I0125 16:59:27.773812   23465 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
	I0125 16:59:27.773830   23465 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 2.937646422s
	I0125 16:59:27.773849   23465 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
	I0125 16:59:27.773863   23465 cache.go:87] Successfully saved all images to host disk.
	I0125 16:59:50.691093   23465 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-01-26 00:58:13.488757005 +0000
	+++ /lib/systemd/system/docker.service.new	2022-01-26 00:59:26.613382054 +0000
	@@ -5,9 +5,12 @@
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -23,7 +26,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	
	I0125 16:59:50.691112   23465 machine.go:91] provisioned docker machine in 25.644668881s
	I0125 16:59:50.691122   23465 start.go:267] post-start starting for "running-upgrade-20220125165756-11219" (driver="docker")
	I0125 16:59:50.691127   23465 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0125 16:59:50.691192   23465 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0125 16:59:50.691250   23465 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220125165756-11219
	I0125 16:59:50.816590   23465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63809 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/running-upgrade-20220125165756-11219/id_rsa Username:docker}
	I0125 16:59:50.942949   23465 ssh_runner.go:195] Run: cat /etc/os-release
	I0125 16:59:50.948011   23465 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0125 16:59:50.948030   23465 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0125 16:59:50.948039   23465 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0125 16:59:50.948051   23465 info.go:137] Remote host: Ubuntu 19.10
	I0125 16:59:50.948065   23465 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/addons for local assets ...
	I0125 16:59:50.948170   23465 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files for local assets ...
	I0125 16:59:50.948343   23465 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem -> 112192.pem in /etc/ssl/certs
	I0125 16:59:50.948526   23465 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0125 16:59:50.958251   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem --> /etc/ssl/certs/112192.pem (1708 bytes)
	I0125 16:59:51.035303   23465 start.go:270] post-start completed in 344.171583ms
	I0125 16:59:51.035372   23465 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0125 16:59:51.035449   23465 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220125165756-11219
	I0125 16:59:51.157067   23465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63809 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/running-upgrade-20220125165756-11219/id_rsa Username:docker}
	I0125 16:59:51.343903   23465 fix.go:57] fixHost completed within 26.459521766s
	I0125 16:59:51.343934   23465 start.go:80] releasing machines lock for "running-upgrade-20220125165756-11219", held for 26.459586245s
	I0125 16:59:51.344099   23465 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-20220125165756-11219
	I0125 16:59:51.465779   23465 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0125 16:59:51.465816   23465 ssh_runner.go:195] Run: systemctl --version
	I0125 16:59:51.465887   23465 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220125165756-11219
	I0125 16:59:51.465894   23465 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-20220125165756-11219
	I0125 16:59:51.590194   23465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63809 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/running-upgrade-20220125165756-11219/id_rsa Username:docker}
	I0125 16:59:51.590442   23465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63809 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/running-upgrade-20220125165756-11219/id_rsa Username:docker}
	I0125 16:59:51.949121   23465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0125 16:59:51.960364   23465 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0125 16:59:51.974167   23465 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0125 16:59:51.974303   23465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0125 16:59:52.035569   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0125 16:59:52.067799   23465 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0125 16:59:52.257151   23465 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0125 16:59:52.362368   23465 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0125 16:59:52.375599   23465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0125 16:59:52.536486   23465 ssh_runner.go:195] Run: sudo systemctl start docker
	I0125 16:59:52.549938   23465 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0125 16:59:52.657449   23465 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0125 16:59:52.862409   23465 out.go:203] * Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
	I0125 16:59:52.862532   23465 cli_runner.go:133] Run: docker exec -t running-upgrade-20220125165756-11219 dig +short host.docker.internal
	I0125 16:59:53.039632   23465 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0125 16:59:53.040083   23465 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0125 16:59:53.045577   23465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
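
Editor's note: the hosts update above strips any stale `host.minikube.internal` line with `grep -v`, appends the fresh mapping, and copies the temp file over /etc/hosts in one step. A sketch of the same upsert in Go (upsertHost is a hypothetical name; minikube performs this remotely through ssh_runner):

    // Sketch only: the grep -v / echo / cp upsert as local file I/O.
    package sketch

    import (
    	"os"
    	"strings"
    )

    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) { // drop any stale mapping
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name) // e.g. 192.168.65.2 <tab> host.minikube.internal
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }
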
	I0125 16:59:53.055522   23465 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" running-upgrade-20220125165756-11219
	I0125 16:59:53.193120   23465 out.go:176]   - kubeadm.pod-network-cidr=10.244.0.0/16
	I0125 16:59:53.193210   23465 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime docker
	I0125 16:59:53.193296   23465 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0125 16:59:53.235715   23465 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.0
	k8s.gcr.io/kube-scheduler:v1.18.0
	k8s.gcr.io/kube-controller-manager:v1.18.0
	k8s.gcr.io/kube-apiserver:v1.18.0
	kubernetesui/dashboard:v2.0.0-rc6
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	kindest/kindnetd:0.5.3
	k8s.gcr.io/etcd:3.4.3-0
	kubernetesui/metrics-scraper:v1.0.2
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	
	-- /stdout --
	I0125 16:59:53.235769   23465 docker.go:612] gcr.io/k8s-minikube/storage-provisioner:v5 wasn't preloaded
	I0125 16:59:53.235777   23465 cache_images.go:88] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.3.1 docker.io/kubernetesui/metrics-scraper:v1.0.7]
	I0125 16:59:53.242167   23465 image.go:134] retrieving image: k8s.gcr.io/etcd:3.4.3-0
	I0125 16:59:53.242169   23465 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.0
	I0125 16:59:53.242755   23465 image.go:134] retrieving image: k8s.gcr.io/pause:3.2
	I0125 16:59:53.243229   23465 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.7
	I0125 16:59:53.244057   23465 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0125 16:59:53.245307   23465 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.18.0
	I0125 16:59:53.245725   23465 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I0125 16:59:53.246040   23465 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0125 16:59:53.246848   23465 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.0
	I0125 16:59:53.247296   23465 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.0
	I0125 16:59:53.251856   23465 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
	I0125 16:59:53.253034   23465 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.0: Error response from daemon: reference does not exist
	I0125 16:59:53.254995   23465 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: reference does not exist
	I0125 16:59:53.255013   23465 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist
	I0125 16:59:53.255024   23465 image.go:180] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
	I0125 16:59:53.255142   23465 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: reference does not exist
	I0125 16:59:53.255147   23465 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.0: Error response from daemon: reference does not exist
	I0125 16:59:53.255375   23465 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: reference does not exist
	I0125 16:59:53.256121   23465 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.0: Error response from daemon: reference does not exist
	I0125 16:59:53.256199   23465 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.0: Error response from daemon: reference does not exist
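
Editor's note: each "daemon lookup ... reference does not exist" line above is an expected miss against the local Docker daemon; the loader then falls back to the registry, which is where the TOOMANYREQUESTS rate-limit failures come from. A sketch of that two-step lookup order (lookupFn and retrieve are illustrative names):

    // Sketch only: daemon-first, registry-second image lookup.
    package sketch

    import "fmt"

    type lookupFn func(ref string) error

    func retrieve(ref string, daemon, remote lookupFn) error {
    	if err := daemon(ref); err == nil {
    		return nil // image already in the local daemon
    	}
    	// Daemon miss ("reference does not exist") -> go to the registry,
    	// which is where the TOOMANYREQUESTS errors above originate.
    	if err := remote(ref); err != nil {
    		return fmt.Errorf("remote lookup for %s: %w", ref, err)
    	}
    	return nil
    }
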
	I0125 16:59:53.819749   23465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
	I0125 16:59:53.860394   23465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.18.0
	I0125 16:59:53.907056   23465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/pause:3.2
	W0125 16:59:53.929661   23465 image.go:190] authn lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7 (trying anon): GET https://index.docker.io/v2/kubernetesui/metrics-scraper/manifests/v1.0.7: TOOMANYREQUESTS: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	I0125 16:59:53.939406   23465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.7
	W0125 16:59:53.990766   23465 image.go:190] authn lookup for docker.io/kubernetesui/dashboard:v2.3.1 (trying anon): GET https://index.docker.io/v2/kubernetesui/dashboard/manifests/v2.3.1: TOOMANYREQUESTS: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	I0125 16:59:54.011318   23465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.18.0
	I0125 16:59:54.035307   23465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.18.0
	I0125 16:59:54.059513   23465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.18.0
	I0125 16:59:54.154501   23465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0125 16:59:54.192037   23465 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0125 16:59:54.192071   23465 docker.go:287] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0125 16:59:54.192219   23465 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0125 16:59:54.232364   23465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0125 16:59:54.233188   23465 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0125 16:59:54.237833   23465 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0125 16:59:54.237862   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0125 16:59:54.320620   23465 image.go:194] remote lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: GET https://index.docker.io/v2/kubernetesui/metrics-scraper/manifests/v1.0.7: TOOMANYREQUESTS: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	I0125 16:59:54.320643   23465 image.go:93] error retrieve Image docker.io/kubernetesui/metrics-scraper:v1.0.7 ref Error response from daemon: reference does not exist 
	I0125 16:59:54.320660   23465 cache_images.go:116] "docker.io/kubernetesui/metrics-scraper:v1.0.7" needs transfer: got empty img digest "" for docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0125 16:59:54.320680   23465 docker.go:287] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0125 16:59:54.320758   23465 ssh_runner.go:195] Run: docker rmi docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0125 16:59:54.386622   23465 image.go:194] remote lookup for docker.io/kubernetesui/dashboard:v2.3.1: GET https://index.docker.io/v2/kubernetesui/dashboard/manifests/v2.3.1: TOOMANYREQUESTS: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	I0125 16:59:54.386650   23465 image.go:93] error retrieve Image docker.io/kubernetesui/dashboard:v2.3.1 ref Error response from daemon: reference does not exist 
	I0125 16:59:54.386668   23465 cache_images.go:116] "docker.io/kubernetesui/dashboard:v2.3.1" needs transfer: got empty img digest "" for docker.io/kubernetesui/dashboard:v2.3.1
	I0125 16:59:54.386687   23465 docker.go:287] Removing image: docker.io/kubernetesui/dashboard:v2.3.1
	I0125 16:59:54.386779   23465 ssh_runner.go:195] Run: docker rmi docker.io/kubernetesui/dashboard:v2.3.1
	I0125 16:59:54.399950   23465 docker.go:254] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0125 16:59:54.399969   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0125 16:59:54.421397   23465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7
	I0125 16:59:54.444669   23465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1
	I0125 16:59:55.253890   23465 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0125 16:59:55.253945   23465 cache_images.go:92] LoadImages completed in 2.018152241s
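
Editor's note: loading an image is a straight pipe — the cached tarball is scp'd to /var/lib/minikube/images and then fed to `docker load` on stdin, as in the `sudo cat ... | docker load` command above. A local sketch of the load step (loadImage is an assumed name):

    // Sketch only: stream a cached image tarball into `docker load`.
    package sketch

    import (
    	"os"
    	"os/exec"
    )

    func loadImage(tarball string) error {
    	f, err := os.Open(tarball) // e.g. /var/lib/minikube/images/storage-provisioner_v5
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	cmd := exec.Command("docker", "load")
    	cmd.Stdin = f // pipe the tar straight into the daemon
    	return cmd.Run()
    }
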
	W0125 16:59:55.254063   23465 out.go:241] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7: no such file or directory
	I0125 16:59:55.254156   23465 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0125 16:59:55.300048   23465 cni.go:93] Creating CNI manager for ""
	I0125 16:59:55.300062   23465 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0125 16:59:55.300073   23465 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0125 16:59:55.300084   23465 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-20220125165756-11219 NodeName:running-upgrade-20220125165756-11219 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:172.17.0.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0125 16:59:55.300194   23465 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.0.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "running-upgrade-20220125165756-11219"
	  kubeletExtraArgs:
	    node-ip: 172.17.0.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0125 16:59:55.300267   23465 kubeadm.go:791] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=running-upgrade-20220125165756-11219 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20220125165756-11219 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
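
Editor's note: both the kubeadm.yaml documents and the kubelet unit above are rendered from the option structs dumped at kubeadm.go:158 and kubeadm.go:791. A sketch of that render step with text/template — this trimmed template is a stand-in, not minikube's real one, and renderConfig is a hypothetical name:

    // Sketch only: render config text from parameters with text/template.
    package sketch

    import (
    	"io"
    	"text/template"
    )

    const clusterTmpl = "apiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\ncontrolPlaneEndpoint: {{.Endpoint}}\nkubernetesVersion: {{.Version}}\n"

    func renderConfig(w io.Writer, endpoint, version string) error {
    	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
    	return t.Execute(w, struct{ Endpoint, Version string }{endpoint, version})
    }
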
	I0125 16:59:55.300337   23465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
	I0125 16:59:55.308640   23465 binaries.go:44] Found k8s binaries, skipping transfer
	I0125 16:59:55.308700   23465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0125 16:59:55.317962   23465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (360 bytes)
	I0125 16:59:55.330742   23465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0125 16:59:55.344884   23465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2073 bytes)
	I0125 16:59:55.359703   23465 ssh_runner.go:195] Run: grep 172.17.0.2	control-plane.minikube.internal$ /etc/hosts
	I0125 16:59:55.364178   23465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.0.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0125 16:59:55.374397   23465 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219 for IP: 172.17.0.2
	I0125 16:59:55.374517   23465 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.key
	I0125 16:59:55.374574   23465 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.key
	I0125 16:59:55.374680   23465 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/client.key
	I0125 16:59:55.374702   23465 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/apiserver.key.7b749c5f
	I0125 16:59:55.374721   23465 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/apiserver.crt.7b749c5f with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0125 16:59:55.443708   23465 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/apiserver.crt.7b749c5f ...
	I0125 16:59:55.443723   23465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/apiserver.crt.7b749c5f: {Name:mk5c421452acf7044bef22d1305b0ac70909985c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 16:59:55.444027   23465 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/apiserver.key.7b749c5f ...
	I0125 16:59:55.444035   23465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/apiserver.key.7b749c5f: {Name:mkd65fc85e7f1571b121a0fdbf93d28145de5700 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 16:59:55.444216   23465 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/apiserver.crt.7b749c5f -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/apiserver.crt
	I0125 16:59:55.444402   23465 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/apiserver.key.7b749c5f -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/apiserver.key
	I0125 16:59:55.444610   23465 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/proxy-client.key
	I0125 16:59:55.444828   23465 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219.pem (1338 bytes)
	W0125 16:59:55.444872   23465 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219_empty.pem, impossibly tiny 0 bytes
	I0125 16:59:55.444882   23465 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem (1675 bytes)
	I0125 16:59:55.444917   23465 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem (1082 bytes)
	I0125 16:59:55.444952   23465 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem (1123 bytes)
	I0125 16:59:55.444985   23465 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem (1675 bytes)
	I0125 16:59:55.445057   23465 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem (1708 bytes)
	I0125 16:59:55.445898   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0125 16:59:55.469495   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0125 16:59:55.485928   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0125 16:59:55.503018   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0125 16:59:55.519104   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0125 16:59:55.535524   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0125 16:59:55.554979   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0125 16:59:55.572474   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0125 16:59:55.589428   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0125 16:59:55.606993   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219.pem --> /usr/share/ca-certificates/11219.pem (1338 bytes)
	I0125 16:59:55.624602   23465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem --> /usr/share/ca-certificates/112192.pem (1708 bytes)
	I0125 16:59:55.641483   23465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0125 16:59:55.654579   23465 ssh_runner.go:195] Run: openssl version
	I0125 16:59:55.660565   23465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0125 16:59:55.669145   23465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0125 16:59:55.673281   23465 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 26 00:00 /usr/share/ca-certificates/minikubeCA.pem
	I0125 16:59:55.673351   23465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0125 16:59:55.679662   23465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0125 16:59:55.687163   23465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11219.pem && ln -fs /usr/share/ca-certificates/11219.pem /etc/ssl/certs/11219.pem"
	I0125 16:59:55.695103   23465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11219.pem
	I0125 16:59:55.699769   23465 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 26 00:05 /usr/share/ca-certificates/11219.pem
	I0125 16:59:55.699839   23465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11219.pem
	I0125 16:59:55.706167   23465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11219.pem /etc/ssl/certs/51391683.0"
	I0125 16:59:55.714211   23465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112192.pem && ln -fs /usr/share/ca-certificates/112192.pem /etc/ssl/certs/112192.pem"
	I0125 16:59:55.722268   23465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112192.pem
	I0125 16:59:55.726365   23465 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 26 00:05 /usr/share/ca-certificates/112192.pem
	I0125 16:59:55.726409   23465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112192.pem
	I0125 16:59:55.732402   23465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112192.pem /etc/ssl/certs/3ec20f2e.0"
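
Editor's note: the `openssl x509 -hash` / `ln -fs` pairs above install each CA under the subject-hash filename (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL uses to find trust anchors in /etc/ssl/certs. A sketch of that install step (installCA is an illustrative name):

    // Sketch only: link a CA PEM under its OpenSSL subject-hash name.
    package sketch

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func installCA(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // mimic ln -fs: replace any stale link
    	return os.Symlink(pem, link)
    }
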
	I0125 16:59:55.741802   23465 kubeadm.go:388] StartCluster: {Name:running-upgrade-20220125165756-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-20220125165756-11219 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false}
	I0125 16:59:55.741933   23465 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0125 16:59:55.779417   23465 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0125 16:59:55.789829   23465 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0125 16:59:55.797313   23465 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0125 16:59:55.797397   23465 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" running-upgrade-20220125165756-11219
	I0125 16:59:55.904746   23465 kubeconfig.go:116] verify returned: extract IP: "running-upgrade-20220125165756-11219" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	I0125 16:59:55.904928   23465 kubeconfig.go:127] "running-upgrade-20220125165756-11219" context is missing from /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig - will repair!
	I0125 16:59:55.906671   23465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig: {Name:mk22ac11166e634b93c7a48f1f20a682ee77d8e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 16:59:55.907383   23465 kapi.go:59] client config for running-upgrade-20220125165756-11219: &rest.Config{Host:"https://127.0.0.1:63811", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/client.crt", KeyFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/running-upgrade-20220125165756-11219/client.key", CAFile:"/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21cd640), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0125 16:59:55.910175   23465 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0125 16:59:55.918341   23465 kubeadm.go:593] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2022-01-26 00:59:01.975488074 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2022-01-26 00:59:55.366485049 +0000
	@@ -23,16 +23,52 @@
	   certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+controllerManager:
	+  extraArgs:
	+    allocate-node-cidrs: "true"
	+    leader-elect: "false"
	+scheduler:
	+  extraArgs:
	+    leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	-controlPlaneEndpoint: 172.17.0.2:8443
	+controlPlaneEndpoint: control-plane.minikube.internal:8443
	 dns:
	   type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	+    extraArgs:
	+      proxy-refresh-interval: "70000"
	 kubernetesVersion: v1.18.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	   serviceSubnet: 10.96.0.0/12
	+---
	+apiVersion: kubelet.config.k8s.io/v1beta1
	+kind: KubeletConfiguration
	+authentication:
	+  x509:
	+    clientCAFile: /var/lib/minikube/certs/ca.crt
	+cgroupDriver: cgroupfs
	+clusterDomain: "cluster.local"
	+# disable disk resource management by default
	+imageGCHighThresholdPercent: 100
	+evictionHard:
	+  nodefs.available: "0%"
	+  nodefs.inodesFree: "0%"
	+  imagefs.available: "0%"
	+failSwapOn: false
	+staticPodPath: /etc/kubernetes/manifests
	+---
	+apiVersion: kubeproxy.config.k8s.io/v1alpha1
	+kind: KubeProxyConfiguration
	+clusterCIDR: "10.244.0.0/16"
	+metricsBindAddress: 0.0.0.0:10249
	+conntrack:
	+  maxPerCore: 0
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	+  tcpEstablishedTimeout: 0s
	+# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	+  tcpCloseWaitTimeout: 0s
	
	-- /stdout --
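
The diff above is the reconfigure check: the kubeadm.yaml freshly generated for this start is compared against the copy already on the node, and any difference triggers the reset/init cycle that follows. The same comparison the test runs via ssh_runner can be reproduced by hand while the node is up (profile name taken from this run):

    # Same command logged at 16:59:55.910175; a non-empty diff means "needs reconfigure".
    minikube ssh -p running-upgrade-20220125165756-11219 -- \
      sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
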
	I0125 16:59:55.918388   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0125 17:00:50.641429   23465 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (54.722817306s)
	I0125 17:00:50.641499   23465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0125 17:00:50.651861   23465 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0125 17:00:50.660553   23465 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0125 17:00:50.660609   23465 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0125 17:00:50.668159   23465 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0125 17:00:50.668189   23465 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	W0125 17:00:50.937719   23465 out.go:241] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0126 01:00:50.719601    5657 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0126 01:00:50.719601    5657 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
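
The three fatal preflight errors are the kube-controller-manager secure port (10257) and the etcd client/peer ports (2379/2380); the kubelet excerpts gathered later in this log show etcd and kube-controller-manager from the pre-upgrade cluster still crash-looping, so those static pods are plausibly what keeps the ports bound. A quick way to confirm from the host, assuming ss is available inside the kicbase node image:

    # List listeners on the conflicting ports inside the node (sketch, not from the test).
    minikube ssh -p running-upgrade-20220125165756-11219 -- \
      sudo ss -tlnp | grep -E ':(10257|2379|2380) '
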
	
	I0125 17:00:50.937745   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0125 17:00:51.024420   23465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0125 17:00:51.034204   23465 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0125 17:00:51.034261   23465 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0125 17:00:51.041804   23465 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0125 17:00:51.041829   23465 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0125 17:00:51.315365   23465 kubeadm.go:390] StartCluster complete in 55.573352274s
	I0125 17:00:51.315476   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0125 17:00:51.354845   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.354857   23465 logs.go:276] No container was found matching "kube-apiserver"
	I0125 17:00:51.354966   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0125 17:00:51.392838   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.392872   23465 logs.go:276] No container was found matching "etcd"
	I0125 17:00:51.392981   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0125 17:00:51.430867   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.430882   23465 logs.go:276] No container was found matching "coredns"
	I0125 17:00:51.430968   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0125 17:00:51.469319   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.469332   23465 logs.go:276] No container was found matching "kube-scheduler"
	I0125 17:00:51.469420   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0125 17:00:51.507689   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.507702   23465 logs.go:276] No container was found matching "kube-proxy"
	I0125 17:00:51.507810   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0125 17:00:51.546710   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.546736   23465 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0125 17:00:51.546862   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0125 17:00:51.584481   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.584494   23465 logs.go:276] No container was found matching "storage-provisioner"
	I0125 17:00:51.584627   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0125 17:00:51.622958   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.622971   23465 logs.go:276] No container was found matching "kube-controller-manager"
	I0125 17:00:51.622978   23465 logs.go:123] Gathering logs for Docker ...
	I0125 17:00:51.622985   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0125 17:00:51.665886   23465 logs.go:123] Gathering logs for container status ...
	I0125 17:00:51.665902   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0125 17:00:53.756959   23465 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.091036951s)
	I0125 17:00:53.757083   23465 logs.go:123] Gathering logs for kubelet ...
	I0125 17:00:53.757091   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0125 17:00:53.820621   23465 logs.go:138] Found kubelet problem: Jan 26 00:59:51 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:51.753158    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	W0125 17:00:53.820847   23465 logs.go:138] Found kubelet problem: Jan 26 00:59:52 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:52.857684    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	W0125 17:00:53.821081   23465 logs.go:138] Found kubelet problem: Jan 26 00:59:52 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:52.866466    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.821403   23465 logs.go:138] Found kubelet problem: Jan 26 00:59:53 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:53.876628    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.821628   23465 logs.go:138] Found kubelet problem: Jan 26 00:59:54 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:54.880413    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.822110   23465 logs.go:138] Found kubelet problem: Jan 26 00:59:56 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:56.958614    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	W0125 17:00:53.822345   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:05 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:05.376602    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.822569   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:08 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:08.376394    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	W0125 17:00:53.823258   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:21 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:21.016916    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.824809   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:24 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:24.055560    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 40s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	W0125 17:00:53.825035   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:24 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:24.549477    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.825252   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:26 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:26.936774    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 40s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	W0125 17:00:53.830196   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:38 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:38.354375    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.830414   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:38 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:38.354933    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 40s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	I0125 17:00:53.830537   23465 logs.go:123] Gathering logs for dmesg ...
	I0125 17:00:53.830545   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0125 17:00:53.852696   23465 logs.go:123] Gathering logs for describe nodes ...
	I0125 17:00:53.878411   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0125 17:00:53.937511   23465 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
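
The connection refused on localhost:8443 is consistent with the container scans at 17:00:51 above: no kube-apiserver (or any other control-plane) container survived the failed init, so nothing is serving the API inside the node. The same scan can be repeated against the node's Docker daemon:

    # Same filter the test uses; an empty result means no apiserver container exists.
    # (--format fields here are standard docker ps template keys.)
    minikube ssh -p running-upgrade-20220125165756-11219 -- \
      docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}} {{.Status}}'
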
	W0125 17:00:53.937537   23465 out.go:370] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0126 01:00:51.091843    5767 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W0125 17:00:53.937548   23465 out.go:241] * 
	* 
	W0125 17:00:53.937639   23465 out.go:241] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0126 01:00:51.091843    5767 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0126 01:00:51.091843    5767 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0125 17:00:53.937654   23465 out.go:241] * 
	* 
	W0125 17:00:53.938236   23465 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0125 17:00:54.010946   23465 out.go:176] X Problems detected in kubelet:
	I0125 17:00:54.057771   23465 out.go:176]   Jan 26 00:59:51 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:51.753158    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	I0125 17:00:54.130799   23465 out.go:176]   Jan 26 00:59:52 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:52.857684    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	I0125 17:00:54.176977   23465 out.go:176]   Jan 26 00:59:52 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:52.866466    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	I0125 17:00:54.222947   23465 out.go:176] 
	W0125 17:00:54.223098   23465 out.go:241] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0126 01:00:51.091843    5767 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0126 01:00:51.091843    5767 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0125 17:00:54.223229   23465 out.go:241] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	* Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W0125 17:00:54.223292   23465 out.go:241] * Related issue: https://github.com/kubernetes/minikube/issues/5484
	* Related issue: https://github.com/kubernetes/minikube/issues/5484
	I0125 17:00:54.248844   23465 out.go:176] 

                                                
                                                
** /stderr **
version_upgrade_test.go:139: upgrade from v1.9.0 to HEAD failed: out/minikube-darwin-amd64 start -p running-upgrade-20220125165756-11219 --memory=2200 --alsologtostderr -v=1 --driver=docker : exit status 81
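
One caveat on the logged suggestion "Run lsof -p<port>": lsof's -p flag filters by process ID, not by port. To find the process actually holding one of the conflicting ports, the usual form is -i, run inside the node:

    # -i :PORT selects by TCP/UDP port; -p would expect a PID instead.
    sudo lsof -i :10257
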
panic.go:642: *** TestRunningBinaryUpgrade FAILED at 2022-01-25 17:00:54.292628 -0800 PST m=+3744.915382133
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect running-upgrade-20220125165756-11219
helpers_test.go:236: (dbg) docker inspect running-upgrade-20220125165756-11219:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7133d9d2ffc55c61972f1ebd494bfa83aaa67b85f6ae1a5b5debf6680965e0dc",
	        "Created": "2022-01-26T00:58:01.184946243Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 248283,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-01-26T00:58:05.092110024Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/7133d9d2ffc55c61972f1ebd494bfa83aaa67b85f6ae1a5b5debf6680965e0dc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7133d9d2ffc55c61972f1ebd494bfa83aaa67b85f6ae1a5b5debf6680965e0dc/hostname",
	        "HostsPath": "/var/lib/docker/containers/7133d9d2ffc55c61972f1ebd494bfa83aaa67b85f6ae1a5b5debf6680965e0dc/hosts",
	        "LogPath": "/var/lib/docker/containers/7133d9d2ffc55c61972f1ebd494bfa83aaa67b85f6ae1a5b5debf6680965e0dc/7133d9d2ffc55c61972f1ebd494bfa83aaa67b85f6ae1a5b5debf6680965e0dc-json.log",
	        "Name": "/running-upgrade-20220125165756-11219",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-20220125165756-11219:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f4b9f093fe64566915b12926c708fc355c3d3a343e2244b780b14810379c62f7-init/diff:/var/lib/docker/overlay2/49c628c641cadffbece21a0bd091405a047c608b266b510ab41a9de2dc3073e4/diff:/var/lib/docker/overlay2/d83caf0ef98535f5b53b8bc444f0edde3c65aee825d60acc65b573fca595e2ff/diff:/var/lib/docker/overlay2/60c17b491166ae22cfe9f643d47a55045ec6492d28c4e9f450bf17f92ec6f174/diff:/var/lib/docker/overlay2/1ef19cbc5068cf9413adeb04908f45525fd0dda2d15c8de8adf3494151647ec8/diff:/var/lib/docker/overlay2/55666c122bf7cee25a540a317fb6f205c153427e02c23b72cf410d88dbb402a1/diff:/var/lib/docker/overlay2/48b00ee424f8e9bf307c7bee0a80566f42834df2157b563f1953d682759f5581/diff:/var/lib/docker/overlay2/2974ba2e56173ef53a7a6dd5af401b7ce317f08481e1283b67a1126ae73692b0/diff:/var/lib/docker/overlay2/ed59a3f6e2c9846b9b1fd95c4935f6016feb66327ac67001da59ae35951763c2/diff:/var/lib/docker/overlay2/2dd9a1b9891da2f220b172655ecc42a8abbe6644f47784941b01503067460bfe/diff:/var/lib/docker/overlay2/61bd79
f443b17b9c1856c64823bb8cdfd1030f8457c6d1c919ce8f793be867bf/diff:/var/lib/docker/overlay2/52da89b89c15a36a2ec86c91159a9c51d22d8e5c9eca2786c289f293e0fe969e/diff:/var/lib/docker/overlay2/915d6e55e63b3b93d804681482be482b9bf14ac5aef4fd555722ee1b1f26c716/diff:/var/lib/docker/overlay2/581bbcf3dcb7fce16a85660ed72a9a89902582230b7d70f6f1aa40e44ac116e4/diff:/var/lib/docker/overlay2/bab86f15ce331fd65cf5d5578ee52611cb71c998e429864064edb11c5692c31c/diff:/var/lib/docker/overlay2/54837fd2c973dda3ea80d82e25a4d27f289aafb43dbac90e0040b1ece863bb66/diff:/var/lib/docker/overlay2/ad3285600a01bc2a5e9e1cabfa55b74f020b57832af202cf6959f711771bb399/diff:/var/lib/docker/overlay2/c421c61c215d25ecc91c29ad61dc9768d7146e0671daa8c68548a385ab2bb936/diff:/var/lib/docker/overlay2/f714cf85cf8d4310234a1a3adc6fff23c386746e23fe15834bfa6a203b7b9825/diff:/var/lib/docker/overlay2/675b275d616392abbce47107cd5c44d746e06e0f63b3f34884deff6f6f678e9e/diff:/var/lib/docker/overlay2/12c3dfce5b403b14d05be7075247c9cc1815c96a5c08166889a4634133e659e0/diff:/var/lib/d
ocker/overlay2/1a9bc59a6b31072fc3ce48473082f6e769299ef6031167b5ec4c78f61b76012e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f4b9f093fe64566915b12926c708fc355c3d3a343e2244b780b14810379c62f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f4b9f093fe64566915b12926c708fc355c3d3a343e2244b780b14810379c62f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f4b9f093fe64566915b12926c708fc355c3d3a343e2244b780b14810379c62f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-20220125165756-11219",
	                "Source": "/var/lib/docker/volumes/running-upgrade-20220125165756-11219/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-20220125165756-11219",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-20220125165756-11219",
	                "name.minikube.sigs.k8s.io": "running-upgrade-20220125165756-11219",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6bba689cbd460dd92c51f31c1087d834c24df5bddb4b4c12591b84443cb86d64",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63809"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63811"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6bba689cbd46",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "afa80f8ce307d85d1c220d44014bbb50d54a22c6aec0bd37d4e532ce3a27490b",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "e13d41a56702febea08f18fc00f9b8fafb171198693557f36279f90385e5f125",
	                    "EndpointID": "afa80f8ce307d85d1c220d44014bbb50d54a22c6aec0bd37d4e532ce3a27490b",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
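
In the inspect output above, NetworkSettings.Ports maps the container's 8443/tcp to host port 63811, which is exactly the Host ("https://127.0.0.1:63811") in the kapi client config logged earlier. The test extracts that mapping with a Go template; the same one-liner works by hand:

    # Print the host port bound to the container's 8443/tcp
    # (same template as the cli_runner command logged at 16:59:55).
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' \
      running-upgrade-20220125165756-11219
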
helpers_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220125165756-11219 -n running-upgrade-20220125165756-11219
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-20220125165756-11219 -n running-upgrade-20220125165756-11219: exit status 2 (594.626449ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p running-upgrade-20220125165756-11219 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p running-upgrade-20220125165756-11219 logs -n 25: (3.637966424s)
helpers_test.go:253: TestRunningBinaryUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|-------------------------------------------|-------------------------------------------|----------|---------|-------------------------------|-------------------------------|
	|  Command   |                   Args                    |                  Profile                  |   User   | Version |          Start Time           |           End Time            |
	|------------|-------------------------------------------|-------------------------------------------|----------|---------|-------------------------------|-------------------------------|
	| delete     | -p                                        | scheduled-stop-20220125164823-11219       | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:50:49 PST | Tue, 25 Jan 2022 16:50:55 PST |
	|            | scheduled-stop-20220125164823-11219       |                                           |          |         |                               |                               |
	| start      | -p                                        | skaffold-20220125165055-11219             | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:50:56 PST | Tue, 25 Jan 2022 16:52:10 PST |
	|            | skaffold-20220125165055-11219             |                                           |          |         |                               |                               |
	|            | --memory=2600 --driver=docker             |                                           |          |         |                               |                               |
	| docker-env | --shell none -p                           | skaffold-20220125165055-11219             | skaffold | v1.25.1 | Tue, 25 Jan 2022 16:52:11 PST | Tue, 25 Jan 2022 16:52:12 PST |
	|            | skaffold-20220125165055-11219             |                                           |          |         |                               |                               |
	|            | --user=skaffold                           |                                           |          |         |                               |                               |
	| -p         | skaffold-20220125165055-11219             | skaffold-20220125165055-11219             | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:52:14 PST | Tue, 25 Jan 2022 16:52:15 PST |
	|            | logs -n 25                                |                                           |          |         |                               |                               |
	| delete     | -p                                        | skaffold-20220125165055-11219             | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:52:18 PST | Tue, 25 Jan 2022 16:52:31 PST |
	|            | skaffold-20220125165055-11219             |                                           |          |         |                               |                               |
	| delete     | -p                                        | insufficient-storage-20220125165231-11219 | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:53:22 PST | Tue, 25 Jan 2022 16:53:34 PST |
	|            | insufficient-storage-20220125165231-11219 |                                           |          |         |                               |                               |
	| delete     | -p                                        | flannel-20220125165334-11219              | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:53:34 PST | Tue, 25 Jan 2022 16:53:35 PST |
	|            | flannel-20220125165334-11219              |                                           |          |         |                               |                               |
	| start      | -p                                        | force-systemd-env-20220125165400-11219    | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:54:00 PST | Tue, 25 Jan 2022 16:55:07 PST |
	|            | force-systemd-env-20220125165400-11219    |                                           |          |         |                               |                               |
	|            | --memory=2048 --alsologtostderr -v=5      |                                           |          |         |                               |                               |
	|            | --driver=docker                           |                                           |          |         |                               |                               |
	| -p         | force-systemd-env-20220125165400-11219    | force-systemd-env-20220125165400-11219    | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:55:07 PST | Tue, 25 Jan 2022 16:55:08 PST |
	|            | ssh docker info --format                  |                                           |          |         |                               |                               |
	|            | {{.CgroupDriver}}                         |                                           |          |         |                               |                               |
	| delete     | -p                                        | force-systemd-env-20220125165400-11219    | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:55:08 PST | Tue, 25 Jan 2022 16:55:23 PST |
	|            | force-systemd-env-20220125165400-11219    |                                           |          |         |                               |                               |
	| start      | -p                                        | offline-docker-20220125165334-11219       | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:53:34 PST | Tue, 25 Jan 2022 16:55:24 PST |
	|            | offline-docker-20220125165334-11219       |                                           |          |         |                               |                               |
	|            | --alsologtostderr -v=1                    |                                           |          |         |                               |                               |
	|            | --memory=2048 --wait=true                 |                                           |          |         |                               |                               |
	|            | --driver=docker                           |                                           |          |         |                               |                               |
	| delete     | -p                                        | offline-docker-20220125165334-11219       | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:55:24 PST | Tue, 25 Jan 2022 16:55:41 PST |
	|            | offline-docker-20220125165334-11219       |                                           |          |         |                               |                               |
	| start      | -p                                        | docker-flags-20220125165541-11219         | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:55:41 PST | Tue, 25 Jan 2022 16:56:27 PST |
	|            | docker-flags-20220125165541-11219         |                                           |          |         |                               |                               |
	|            | --cache-images=false                      |                                           |          |         |                               |                               |
	|            | --memory=2048                             |                                           |          |         |                               |                               |
	|            | --install-addons=false                    |                                           |          |         |                               |                               |
	|            | --wait=false --docker-env=FOO=BAR         |                                           |          |         |                               |                               |
	|            | --docker-env=BAZ=BAT                      |                                           |          |         |                               |                               |
	|            | --docker-opt=debug                        |                                           |          |         |                               |                               |
	|            | --docker-opt=icc=true                     |                                           |          |         |                               |                               |
	|            | --alsologtostderr -v=5                    |                                           |          |         |                               |                               |
	|            | --driver=docker                           |                                           |          |         |                               |                               |
	| -p         | docker-flags-20220125165541-11219         | docker-flags-20220125165541-11219         | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:56:27 PST | Tue, 25 Jan 2022 16:56:28 PST |
	|            | ssh sudo systemctl show docker            |                                           |          |         |                               |                               |
	|            | --property=Environment --no-pager         |                                           |          |         |                               |                               |
	| -p         | docker-flags-20220125165541-11219         | docker-flags-20220125165541-11219         | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:56:28 PST | Tue, 25 Jan 2022 16:56:28 PST |
	|            | ssh sudo systemctl show docker            |                                           |          |         |                               |                               |
	|            | --property=ExecStart --no-pager           |                                           |          |         |                               |                               |
	| start      | -p                                        | force-systemd-flag-20220125165523-11219   | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:55:23 PST | Tue, 25 Jan 2022 16:56:29 PST |
	|            | force-systemd-flag-20220125165523-11219   |                                           |          |         |                               |                               |
	|            | --memory=2048 --force-systemd             |                                           |          |         |                               |                               |
	|            | --alsologtostderr -v=5 --driver=docker    |                                           |          |         |                               |                               |
	| -p         | force-systemd-flag-20220125165523-11219   | force-systemd-flag-20220125165523-11219   | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:56:30 PST | Tue, 25 Jan 2022 16:56:30 PST |
	|            | ssh docker info --format                  |                                           |          |         |                               |                               |
	|            | {{.CgroupDriver}}                         |                                           |          |         |                               |                               |
	| delete     | -p                                        | docker-flags-20220125165541-11219         | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:56:29 PST | Tue, 25 Jan 2022 16:56:43 PST |
	|            | docker-flags-20220125165541-11219         |                                           |          |         |                               |                               |
	| delete     | -p                                        | force-systemd-flag-20220125165523-11219   | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:56:30 PST | Tue, 25 Jan 2022 16:56:46 PST |
	|            | force-systemd-flag-20220125165523-11219   |                                           |          |         |                               |                               |
	| start      | -p                                        | cert-expiration-20220125165643-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:56:43 PST | Tue, 25 Jan 2022 16:57:41 PST |
	|            | cert-expiration-20220125165643-11219      |                                           |          |         |                               |                               |
	|            | --memory=2048 --cert-expiration=3m        |                                           |          |         |                               |                               |
	|            | --driver=docker                           |                                           |          |         |                               |                               |
	| start      | -p                                        | cert-options-20220125165646-11219         | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:56:46 PST | Tue, 25 Jan 2022 16:57:42 PST |
	|            | cert-options-20220125165646-11219         |                                           |          |         |                               |                               |
	|            | --memory=2048                             |                                           |          |         |                               |                               |
	|            | --apiserver-ips=127.0.0.1                 |                                           |          |         |                               |                               |
	|            | --apiserver-ips=192.168.15.15             |                                           |          |         |                               |                               |
	|            | --apiserver-names=localhost               |                                           |          |         |                               |                               |
	|            | --apiserver-names=www.google.com          |                                           |          |         |                               |                               |
	|            | --apiserver-port=8555                     |                                           |          |         |                               |                               |
	|            | --driver=docker                           |                                           |          |         |                               |                               |
	|            | --apiserver-name=localhost                |                                           |          |         |                               |                               |
	| -p         | cert-options-20220125165646-11219         | cert-options-20220125165646-11219         | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:57:42 PST | Tue, 25 Jan 2022 16:57:43 PST |
	|            | ssh openssl x509 -text -noout -in         |                                           |          |         |                               |                               |
	|            | /var/lib/minikube/certs/apiserver.crt     |                                           |          |         |                               |                               |
	| ssh        | -p                                        | cert-options-20220125165646-11219         | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:57:43 PST | Tue, 25 Jan 2022 16:57:43 PST |
	|            | cert-options-20220125165646-11219         |                                           |          |         |                               |                               |
	|            | -- sudo cat                               |                                           |          |         |                               |                               |
	|            | /etc/kubernetes/admin.conf                |                                           |          |         |                               |                               |
	| delete     | -p                                        | cert-options-20220125165646-11219         | jenkins  | v1.25.1 | Tue, 25 Jan 2022 16:57:44 PST | Tue, 25 Jan 2022 16:57:56 PST |
	|            | cert-options-20220125165646-11219         |                                           |          |         |                               |                               |
	| start      | -p                                        | cert-expiration-20220125165643-11219      | jenkins  | v1.25.1 | Tue, 25 Jan 2022 17:00:41 PST | Tue, 25 Jan 2022 17:00:47 PST |
	|            | cert-expiration-20220125165643-11219      |                                           |          |         |                               |                               |
	|            | --memory=2048                             |                                           |          |         |                               |                               |
	|            | --cert-expiration=8760h                   |                                           |          |         |                               |                               |
	|            | --driver=docker                           |                                           |          |         |                               |                               |
	|------------|-------------------------------------------|-------------------------------------------|----------|---------|-------------------------------|-------------------------------|
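The audit table above records every minikube invocation the suite made, with start and end times. For context, a minimal Go sketch of how a harness can drive the built binary this way (the binary path and flags are copied from the table; the profile name here is a stand-in):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Invoke the built minikube binary the way the audit table records,
	// e.g. the cert-expiration start with a deliberately short expiry.
	// "out/minikube-darwin-amd64" is the path used throughout this report;
	// the profile name is illustrative only.
	cmd := exec.Command("out/minikube-darwin-amd64", "start",
		"-p", "cert-expiration-demo",
		"--memory=2048", "--cert-expiration=3m", "--driver=docker")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		log.Fatalf("start failed: %v", err)
	}
}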
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/25 17:00:41
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0125 17:00:41.260066   23624 out.go:297] Setting OutFile to fd 1 ...
	I0125 17:00:41.260200   23624 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 17:00:41.260202   23624 out.go:310] Setting ErrFile to fd 2...
	I0125 17:00:41.260204   23624 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 17:00:41.260275   23624 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/bin
	I0125 17:00:41.261874   23624 out.go:304] Setting JSON to false
	I0125 17:00:41.286097   23624 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":9016,"bootTime":1643149825,"procs":326,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0125 17:00:41.286188   23624 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0125 17:00:41.314613   23624 out.go:176] * [cert-expiration-20220125165643-11219] minikube v1.25.1 on Darwin 11.1
	I0125 17:00:41.314722   23624 notify.go:174] Checking for updates...
	I0125 17:00:41.361779   23624 out.go:176]   - MINIKUBE_LOCATION=13326
	I0125 17:00:41.388640   23624 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	I0125 17:00:41.414781   23624 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0125 17:00:41.440781   23624 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0125 17:00:41.466663   23624 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	I0125 17:00:41.467411   23624 config.go:176] Loaded profile config "cert-expiration-20220125165643-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 17:00:41.468062   23624 driver.go:344] Setting default libvirt URI to qemu:///system
	I0125 17:00:41.562443   23624 docker.go:132] docker version: linux-20.10.5
	I0125 17:00:41.562672   23624 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 17:00:41.723754   23624 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:54 SystemTime:2022-01-26 01:00:41.690504922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 17:00:41.802197   23624 out.go:176] * Using the docker driver based on existing profile
	I0125 17:00:41.802226   23624 start.go:280] selected driver: docker
	I0125 17:00:41.802231   23624 start.go:795] validating driver "docker" against &{Name:cert-expiration-20220125165643-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:cert-expiration-20220125165643-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0125 17:00:41.802311   23624 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0125 17:00:41.804607   23624 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 17:00:41.959596   23624 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:54 SystemTime:2022-01-26 01:00:41.907817046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 17:00:41.959785   23624 cni.go:93] Creating CNI manager for ""
	I0125 17:00:41.959798   23624 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0125 17:00:41.959809   23624 start_flags.go:302] config:
	{Name:cert-expiration-20220125165643-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:cert-expiration-20220125165643-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0125 17:00:42.008504   23624 out.go:176] * Starting control plane node cert-expiration-20220125165643-11219 in cluster cert-expiration-20220125165643-11219
	I0125 17:00:42.008549   23624 cache.go:120] Beginning downloading kic base image for docker with docker
	I0125 17:00:42.034415   23624 out.go:176] * Pulling base image ...
	I0125 17:00:42.034442   23624 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0125 17:00:42.034468   23624 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0125 17:00:42.034484   23624 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0125 17:00:42.034495   23624 cache.go:57] Caching tarball of preloaded images
	I0125 17:00:42.034603   23624 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0125 17:00:42.034616   23624 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
	I0125 17:00:42.035077   23624 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/config.json ...
	I0125 17:00:42.155151   23624 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0125 17:00:42.155160   23624 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0125 17:00:42.155168   23624 cache.go:208] Successfully downloaded all kic artifacts
	I0125 17:00:42.155248   23624 start.go:313] acquiring machines lock for cert-expiration-20220125165643-11219: {Name:mkcff6990e2e36dbe850ed8d48e7295ba41f400b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 17:00:42.155344   23624 start.go:317] acquired machines lock for "cert-expiration-20220125165643-11219" in 80.785µs
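The machines lock above is acquired with Delay:500ms and Timeout:10m0s. A sketch of those retry semantics, assuming a generic tryLock backend (the actual lock implementation is not shown in this log):

package main

import (
	"fmt"
	"time"
)

// acquireWithTimeout polls tryLock every delay until timeout elapses,
// matching the Delay/Timeout fields in the machines-lock line above.
// tryLock stands in for whatever lock backend is actually used.
func acquireWithTimeout(tryLock func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !tryLock() {
		if time.Now().After(deadline) {
			return fmt.Errorf("could not acquire lock within %v", timeout)
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	held := false
	err := acquireWithTimeout(func() bool { return !held }, 500*time.Millisecond, 10*time.Minute)
	fmt.Println("acquired:", err == nil)
}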
	I0125 17:00:42.155366   23624 start.go:93] Skipping create...Using existing machine configuration
	I0125 17:00:42.155374   23624 fix.go:55] fixHost starting: 
	I0125 17:00:42.155635   23624 cli_runner.go:133] Run: docker container inspect cert-expiration-20220125165643-11219 --format={{.State.Status}}
	I0125 17:00:42.265376   23624 fix.go:108] recreateIfNeeded on cert-expiration-20220125165643-11219: state=Running err=<nil>
	W0125 17:00:42.265401   23624 fix.go:134] unexpected machine state, will restart: <nil>
	I0125 17:00:42.292383   23624 out.go:176] * Updating the running docker "cert-expiration-20220125165643-11219" container ...
	I0125 17:00:42.292407   23624 machine.go:88] provisioning docker machine ...
	I0125 17:00:42.292429   23624 ubuntu.go:169] provisioning hostname "cert-expiration-20220125165643-11219"
	I0125 17:00:42.292515   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:42.401589   23624 main.go:130] libmachine: Using SSH client type: native
	I0125 17:00:42.401800   23624 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 63038 <nil> <nil>}
	I0125 17:00:42.401809   23624 main.go:130] libmachine: About to run SSH command:
	sudo hostname cert-expiration-20220125165643-11219 && echo "cert-expiration-20220125165643-11219" | sudo tee /etc/hostname
	I0125 17:00:42.552332   23624 main.go:130] libmachine: SSH cmd err, output: <nil>: cert-expiration-20220125165643-11219
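The `docker container inspect -f` template above is how every SSH connection in this log finds the host port published for the container's 22/tcp (here 63038). A sketch of that lookup, assuming a local docker CLI:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is published for the
// container's 22/tcp, using the same inspect template as the log above.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("cert-expiration-20220125165643-11219")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh port:", port)
}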
	
	I0125 17:00:42.552448   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:42.663544   23624 main.go:130] libmachine: Using SSH client type: native
	I0125 17:00:42.663695   23624 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 63038 <nil> <nil>}
	I0125 17:00:42.663706   23624 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-20220125165643-11219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-20220125165643-11219/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-20220125165643-11219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0125 17:00:42.803282   23624 main.go:130] libmachine: SSH cmd err, output: <nil>: 
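The shell snippet above is the usual hosts fixup: if no /etc/hosts line maps the machine name, rewrite or append a 127.0.1.1 entry. A Go sketch of the read-side guard (the function name is an assumption):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// hostsHasEntry reports whether any /etc/hosts line already maps name,
// mirroring the grep guard in the provisioning snippet above.
func hostsHasEntry(path, name string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) < 2 || strings.HasPrefix(fields[0], "#") {
			continue
		}
		for _, host := range fields[1:] {
			if host == name {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hostsHasEntry("/etc/hosts", "cert-expiration-20220125165643-11219")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("entry present:", ok)
}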
	I0125 17:00:42.803296   23624 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube}
	I0125 17:00:42.803321   23624 ubuntu.go:177] setting up certificates
	I0125 17:00:42.803332   23624 provision.go:83] configureAuth start
	I0125 17:00:42.803408   23624 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-20220125165643-11219
	I0125 17:00:42.913008   23624 provision.go:138] copyHostCerts
	I0125 17:00:42.913103   23624 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem, removing ...
	I0125 17:00:42.913109   23624 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem
	I0125 17:00:42.913209   23624 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem (1123 bytes)
	I0125 17:00:42.913410   23624 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem, removing ...
	I0125 17:00:42.913417   23624 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem
	I0125 17:00:42.913476   23624 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem (1675 bytes)
	I0125 17:00:42.913643   23624 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem, removing ...
	I0125 17:00:42.913650   23624 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem
	I0125 17:00:42.913706   23624 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem (1082 bytes)
	I0125 17:00:42.913830   23624 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-20220125165643-11219 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube cert-expiration-20220125165643-11219]
	I0125 17:00:43.109650   23624 provision.go:172] copyRemoteCerts
	I0125 17:00:43.109723   23624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0125 17:00:43.109777   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:43.220338   23624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63038 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/cert-expiration-20220125165643-11219/id_rsa Username:docker}
	I0125 17:00:43.316332   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0125 17:00:43.341353   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0125 17:00:43.359502   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem --> /etc/docker/server.pem (1281 bytes)
	I0125 17:00:43.377984   23624 provision.go:86] duration metric: configureAuth took 574.639114ms
	I0125 17:00:43.377992   23624 ubuntu.go:193] setting minikube options for container-runtime
	I0125 17:00:43.378170   23624 config.go:176] Loaded profile config "cert-expiration-20220125165643-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 17:00:43.378232   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:43.488835   23624 main.go:130] libmachine: Using SSH client type: native
	I0125 17:00:43.488995   23624 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 63038 <nil> <nil>}
	I0125 17:00:43.489001   23624 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0125 17:00:43.627034   23624 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0125 17:00:43.627047   23624 ubuntu.go:71] root file system type: overlay
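The overlay detection above runs `df --output=fstype / | tail -n 1` over SSH. The same probe in Go, run locally for illustration (requires GNU df, as in the Ubuntu-based minikube container):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// rootFSType returns the filesystem type of /, the probe behind the
// "root file system type: overlay" line above.
func rootFSType() (string, error) {
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	return lines[len(lines)-1], nil // last line, like "tail -n 1"
}

func main() {
	fstype, err := rootFSType()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("root fs:", fstype)
}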
	I0125 17:00:43.627197   23624 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0125 17:00:43.627305   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:43.739200   23624 main.go:130] libmachine: Using SSH client type: native
	I0125 17:00:43.739383   23624 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 63038 <nil> <nil>}
	I0125 17:00:43.739428   23624 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0125 17:00:43.886709   23624 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0125 17:00:43.886827   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:44.000998   23624 main.go:130] libmachine: Using SSH client type: native
	I0125 17:00:44.001135   23624 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 63038 <nil> <nil>}
	I0125 17:00:44.001144   23624 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0125 17:00:44.144067   23624 main.go:130] libmachine: SSH cmd err, output: <nil>: 
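The command above makes the docker.service update idempotent: the staged unit only replaces the live one, and docker is only reloaded and restarted, when the two files differ. A sketch that assembles the same shell, with the unit path as a parameter (the helper name is an assumption):

package main

import "fmt"

// updateUnitCmd builds the "replace only if changed" shell from the log:
// diff the staged unit against the live one; on any difference, move it
// into place, daemon-reload, then enable and restart docker.
func updateUnitCmd(path string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || "+
			"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
			"sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
		path)
}

func main() {
	fmt.Println(updateUnitCmd("/lib/systemd/system/docker.service"))
}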
	I0125 17:00:44.144083   23624 machine.go:91] provisioned docker machine in 1.851666005s
	I0125 17:00:44.144089   23624 start.go:267] post-start starting for "cert-expiration-20220125165643-11219" (driver="docker")
	I0125 17:00:44.144092   23624 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0125 17:00:44.144171   23624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0125 17:00:44.144231   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:44.254547   23624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63038 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/cert-expiration-20220125165643-11219/id_rsa Username:docker}
	I0125 17:00:44.351219   23624 ssh_runner.go:195] Run: cat /etc/os-release
	I0125 17:00:44.355579   23624 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0125 17:00:44.355589   23624 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0125 17:00:44.355595   23624 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0125 17:00:44.355599   23624 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0125 17:00:44.355607   23624 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/addons for local assets ...
	I0125 17:00:44.355699   23624 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files for local assets ...
	I0125 17:00:44.355837   23624 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem -> 112192.pem in /etc/ssl/certs
	I0125 17:00:44.356001   23624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0125 17:00:44.363004   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem --> /etc/ssl/certs/112192.pem (1708 bytes)
	I0125 17:00:44.380318   23624 start.go:270] post-start completed in 236.213171ms
	I0125 17:00:44.380395   23624 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0125 17:00:44.380470   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:44.505441   23624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63038 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/cert-expiration-20220125165643-11219/id_rsa Username:docker}
	I0125 17:00:44.602133   23624 fix.go:57] fixHost completed within 2.446751847s
	I0125 17:00:44.602145   23624 start.go:80] releasing machines lock for "cert-expiration-20220125165643-11219", held for 2.446787295s
	I0125 17:00:44.602246   23624 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-20220125165643-11219
	I0125 17:00:44.712138   23624 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0125 17:00:44.712151   23624 ssh_runner.go:195] Run: systemctl --version
	I0125 17:00:44.712215   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:44.712220   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:44.835638   23624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63038 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/cert-expiration-20220125165643-11219/id_rsa Username:docker}
	I0125 17:00:44.835657   23624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63038 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/cert-expiration-20220125165643-11219/id_rsa Username:docker}
	I0125 17:00:45.115041   23624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0125 17:00:45.126568   23624 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0125 17:00:45.136895   23624 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0125 17:00:45.136953   23624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0125 17:00:45.148817   23624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0125 17:00:45.165199   23624 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0125 17:00:45.250054   23624 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0125 17:00:45.336019   23624 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0125 17:00:45.347763   23624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0125 17:00:45.427319   23624 ssh_runner.go:195] Run: sudo systemctl start docker
	I0125 17:00:45.438065   23624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0125 17:00:45.477954   23624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0125 17:00:45.545972   23624 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0125 17:00:45.546076   23624 cli_runner.go:133] Run: docker exec -t cert-expiration-20220125165643-11219 dig +short host.docker.internal
	I0125 17:00:45.726090   23624 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0125 17:00:45.726259   23624 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
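Above, minikube discovers the host's IP from inside the container by digging host.docker.internal, then checks whether /etc/hosts already carries a matching host.minikube.internal entry. The dig step as a sketch:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// hostIPFromContainer resolves host.docker.internal inside the given
// container, the "got host ip ... by digging dns" step above.
func hostIPFromContainer(container string) (string, error) {
	out, err := exec.Command("docker", "exec", "-t", container,
		"dig", "+short", "host.docker.internal").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := hostIPFromContainer("cert-expiration-20220125165643-11219")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("host ip:", ip)
}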
	I0125 17:00:45.731610   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:45.870944   23624 out.go:176]   - kubelet.housekeeping-interval=5m
	I0125 17:00:45.871016   23624 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0125 17:00:45.871086   23624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0125 17:00:45.903853   23624 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0125 17:00:45.903867   23624 docker.go:537] Images already preloaded, skipping extraction
	I0125 17:00:45.903980   23624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0125 17:00:45.936109   23624 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0125 17:00:45.936127   23624 cache_images.go:84] Images are preloaded, skipping loading
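"Images are preloaded, skipping loading" follows from comparing `docker images --format {{.Repository}}:{{.Tag}}` output against the expected preload set. A sketch of that comparison, with the expected list taken from the stdout block above:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether every expected image is already in
// the daemon, the check behind "Images are preloaded, skipping loading".
func imagesPreloaded(expected []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format",
		"{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := make(map[string]bool)
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{
		"k8s.gcr.io/kube-apiserver:v1.23.2",
		"k8s.gcr.io/etcd:3.5.1-0",
		"k8s.gcr.io/pause:3.6",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("preloaded:", ok)
}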
	I0125 17:00:45.936230   23624 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0125 17:00:46.019765   23624 cni.go:93] Creating CNI manager for ""
	I0125 17:00:46.019772   23624 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0125 17:00:46.019786   23624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0125 17:00:46.019797   23624 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-20220125165643-11219 NodeName:cert-expiration-20220125165643-11219 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0125 17:00:46.019899   23624 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "cert-expiration-20220125165643-11219"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0125 17:00:46.019991   23624 kubeadm.go:791] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=cert-expiration-20220125165643-11219 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:cert-expiration-20220125165643-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0125 17:00:46.020047   23624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0125 17:00:46.028044   23624 binaries.go:44] Found k8s binaries, skipping transfer
	I0125 17:00:46.028100   23624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0125 17:00:46.035577   23624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I0125 17:00:46.049086   23624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0125 17:00:46.062404   23624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2058 bytes)
	I0125 17:00:46.076597   23624 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0125 17:00:46.080908   23624 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219 for IP: 192.168.49.2
	I0125 17:00:46.081030   23624 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.key
	I0125 17:00:46.081093   23624 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.key
	W0125 17:00:46.081249   23624 out.go:241] ! Certificate client.crt has expired. Generating a new one...
	I0125 17:00:46.081261   23624 certs.go:527] cert expired /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/client.crt: expiration: 2022-01-26 01:00:22 +0000 UTC, now: 2022-01-25 17:00:46.081256 -0800 PST m=+4.854993853
	I0125 17:00:46.082272   23624 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/client.key
	I0125 17:00:46.082304   23624 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/client.crt with IP's: []
	I0125 17:00:46.160624   23624 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/client.crt ...
	I0125 17:00:46.160634   23624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/client.crt: {Name:mkf56486e3207f5dae0eaa316f44671db292df24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:00:46.160918   23624 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/client.key ...
	I0125 17:00:46.160923   23624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/client.key: {Name:mk50f670a9682685c9ae7768ad918f349a471ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
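The expired-certificate warnings in this block come from comparing each certificate's NotAfter with the current time; this profile was started with --cert-expiration=3m, so the client, apiserver and proxy-client certs all lapse and are regenerated. A minimal sketch of that check using only the standard library (the path in main is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// certExpired reports whether the PEM certificate at path has passed
// its NotAfter, the test behind "Certificate ... has expired" above.
func certExpired(path string, now time.Time) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return now.After(cert.NotAfter), nil
}

func main() {
	expired, err := certExpired("client.crt", time.Now())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expired:", expired)
}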
	W0125 17:00:46.161215   23624 out.go:241] ! Certificate apiserver.crt.dd3b5fb2 has expired. Generating a new one...
	I0125 17:00:46.161226   23624 certs.go:527] cert expired /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/apiserver.crt.dd3b5fb2: expiration: 2022-01-26 01:00:22 +0000 UTC, now: 2022-01-25 17:00:46.161223 -0800 PST m=+4.934960238
	I0125 17:00:46.161664   23624 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/apiserver.key.dd3b5fb2
	I0125 17:00:46.161698   23624 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0125 17:00:46.251058   23624 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/apiserver.crt.dd3b5fb2 ...
	I0125 17:00:46.257883   23624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/apiserver.crt.dd3b5fb2: {Name:mk4dd6159acedf5fe1ba8557afd9d1f851f7d326 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:00:46.258432   23624 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/apiserver.key.dd3b5fb2 ...
	I0125 17:00:46.258451   23624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/apiserver.key.dd3b5fb2: {Name:mk7aedf3341874036da93efe16748870ac9dd4ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:00:46.258762   23624 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/apiserver.crt
	I0125 17:00:46.259167   23624 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/apiserver.key
	W0125 17:00:46.259685   23624 out.go:241] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0125 17:00:46.259697   23624 certs.go:527] cert expired /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/proxy-client.crt: expiration: 2022-01-26 01:00:22 +0000 UTC, now: 2022-01-25 17:00:46.259694 -0800 PST m=+5.033430505
	I0125 17:00:46.260071   23624 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/proxy-client.key
	I0125 17:00:46.260099   23624 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/proxy-client.crt with IP's: []
	I0125 17:00:46.370857   23624 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/proxy-client.crt ...
	I0125 17:00:46.370867   23624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/proxy-client.crt: {Name:mk46c2ea6c112981c32d0c93bb9d4bb550685a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:00:46.371117   23624 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/proxy-client.key ...
	I0125 17:00:46.371122   23624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/proxy-client.key: {Name:mkb33b86d1578a5dfe6c4839903b7879251b67c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:00:46.371506   23624 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219.pem (1338 bytes)
	W0125 17:00:46.371550   23624 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219_empty.pem, impossibly tiny 0 bytes
	I0125 17:00:46.371563   23624 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem (1675 bytes)
	I0125 17:00:46.371615   23624 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem (1082 bytes)
	I0125 17:00:46.371656   23624 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem (1123 bytes)
	I0125 17:00:46.371692   23624 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem (1675 bytes)
	I0125 17:00:46.371763   23624 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem (1708 bytes)
	I0125 17:00:46.372667   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0125 17:00:46.391324   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0125 17:00:46.411084   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0125 17:00:46.429817   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cert-expiration-20220125165643-11219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0125 17:00:46.448180   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0125 17:00:46.468176   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0125 17:00:46.487299   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0125 17:00:46.505616   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0125 17:00:46.523333   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0125 17:00:46.544384   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219.pem --> /usr/share/ca-certificates/11219.pem (1338 bytes)
	I0125 17:00:46.565218   23624 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem --> /usr/share/ca-certificates/112192.pem (1708 bytes)
	I0125 17:00:46.583940   23624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0125 17:00:46.597719   23624 ssh_runner.go:195] Run: openssl version
	I0125 17:00:46.603632   23624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0125 17:00:46.611866   23624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0125 17:00:46.616364   23624 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 26 00:00 /usr/share/ca-certificates/minikubeCA.pem
	I0125 17:00:46.616427   23624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0125 17:00:46.622901   23624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0125 17:00:46.634344   23624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11219.pem && ln -fs /usr/share/ca-certificates/11219.pem /etc/ssl/certs/11219.pem"
	I0125 17:00:46.643209   23624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11219.pem
	I0125 17:00:46.649697   23624 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 26 00:05 /usr/share/ca-certificates/11219.pem
	I0125 17:00:46.649785   23624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11219.pem
	I0125 17:00:46.659288   23624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11219.pem /etc/ssl/certs/51391683.0"
	I0125 17:00:46.667980   23624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112192.pem && ln -fs /usr/share/ca-certificates/112192.pem /etc/ssl/certs/112192.pem"
	I0125 17:00:46.676162   23624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112192.pem
	I0125 17:00:46.680658   23624 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 26 00:05 /usr/share/ca-certificates/112192.pem
	I0125 17:00:46.680702   23624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112192.pem
	I0125 17:00:46.686230   23624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112192.pem /etc/ssl/certs/3ec20f2e.0"
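The three-command pattern above is how minikube registers a CA with the node's OpenSSL trust store: copy the PEM under /usr/share/ca-certificates, compute its subject hash, then symlink <hash>.0 to it under /etc/ssl/certs. A minimal shell sketch of the same steps for the minikubeCA.pem from this run (the b5213941.0 link checked at 17:00:46.622901 is exactly such a hash link):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints the subject hash, e.g. b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"  # OpenSSL resolves CAs by <hash>.N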
	I0125 17:00:46.693675   23624 kubeadm.go:388] StartCluster: {Name:cert-expiration-20220125165643-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:cert-expiration-20220125165643-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0125 17:00:46.693792   23624 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0125 17:00:46.723762   23624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0125 17:00:46.732165   23624 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0125 17:00:46.739310   23624 kubeadm.go:124] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0125 17:00:46.739397   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:46.849893   23624 kubeconfig.go:92] found "cert-expiration-20220125165643-11219" server: "https://127.0.0.1:63044"
	I0125 17:00:46.853129   23624 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0125 17:00:46.861012   23624 api_server.go:165] Checking apiserver status ...
	I0125 17:00:46.861074   23624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0125 17:00:46.876147   23624 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1744/cgroup
	I0125 17:00:46.885796   23624 api_server.go:181] apiserver freezer: "7:freezer:/docker/bb5df1e96dd47b5f524dfc3e42aaa7bf28e08c44f352db0a7d7ebf08a8f69b11/kubepods/burstable/podbdd84c8cc7fb865ea0d30572548ee059/27c59181832a2a6118c0f0b4a613ccac0e4a0f3ff93c40320fbf303410151cea"
	I0125 17:00:46.885871   23624 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bb5df1e96dd47b5f524dfc3e42aaa7bf28e08c44f352db0a7d7ebf08a8f69b11/kubepods/burstable/podbdd84c8cc7fb865ea0d30572548ee059/27c59181832a2a6118c0f0b4a613ccac0e4a0f3ff93c40320fbf303410151cea/freezer.state
	I0125 17:00:46.893402   23624 api_server.go:203] freezer state: "THAWED"
	I0125 17:00:46.893411   23624 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63044/healthz ...
	I0125 17:00:46.898718   23624 api_server.go:266] https://127.0.0.1:63044/healthz returned 200:
	ok
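	Two liveness probes run back to back here: the freezer check reads the apiserver container's cgroup freezer.state to confirm it is THAWED (i.e. not paused), and the healthz check probes the host-mapped API port. A hedged equivalent of the second probe, using the 127.0.0.1:63044 mapping that `docker container inspect` reported for the node's 8443/tcp above (-k because the serving cert is signed by minikube's own CA):

	curl -sk https://127.0.0.1:63044/healthz   # expected body: ok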
	I0125 17:00:46.913094   23624 system_pods.go:86] 7 kube-system pods found
	I0125 17:00:46.913106   23624 system_pods.go:89] "coredns-64897985d-jx6gn" [cd4ffadb-f863-4d91-a8dd-b7881822a736] Running
	I0125 17:00:46.913109   23624 system_pods.go:89] "etcd-cert-expiration-20220125165643-11219" [e8a23123-0d7a-4df4-892e-a7e07c746479] Running
	I0125 17:00:46.913111   23624 system_pods.go:89] "kube-apiserver-cert-expiration-20220125165643-11219" [c14fe708-c081-4c6c-9cba-1295da5d7d7c] Running
	I0125 17:00:46.913113   23624 system_pods.go:89] "kube-controller-manager-cert-expiration-20220125165643-11219" [86cff5e8-c128-4896-9188-65388b705c85] Running
	I0125 17:00:46.913119   23624 system_pods.go:89] "kube-proxy-4298k" [17149df9-d8b5-4313-8170-b03e1530ef38] Running
	I0125 17:00:46.913121   23624 system_pods.go:89] "kube-scheduler-cert-expiration-20220125165643-11219" [45e20e4f-4df6-428d-b7bc-b56b35d6465f] Running
	I0125 17:00:46.913123   23624 system_pods.go:89] "storage-provisioner" [791f59fc-d44f-4f97-bec9-db05e5ef9cc4] Running
	I0125 17:00:46.914784   23624 api_server.go:140] control plane version: v1.23.2
	I0125 17:00:46.914791   23624 kubeadm.go:618] The running cluster does not require reconfiguration: 127.0.0.1
	I0125 17:00:46.914794   23624 kubeadm.go:390] StartCluster complete in 221.124537ms
	I0125 17:00:46.914804   23624 settings.go:142] acquiring lock: {Name:mk4b38f66d2c1d7ad910ce332a6e0f9663533ce8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:00:46.914887   23624 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	I0125 17:00:46.915493   23624 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig: {Name:mk22ac11166e634b93c7a48f1f20a682ee77d8e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:00:46.921013   23624 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cert-expiration-20220125165643-11219" rescaled to 1
	I0125 17:00:46.921047   23624 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}
	I0125 17:00:46.921065   23624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0125 17:00:46.973100   23624 out.go:176] * Verifying Kubernetes components...
	I0125 17:00:46.921099   23624 addons.go:415] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0125 17:00:46.921234   23624 config.go:176] Loaded profile config "cert-expiration-20220125165643-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 17:00:46.973209   23624 addons.go:65] Setting default-storageclass=true in profile "cert-expiration-20220125165643-11219"
	I0125 17:00:46.973242   23624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-20220125165643-11219"
	I0125 17:00:46.973248   23624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0125 17:00:46.973381   23624 addons.go:65] Setting storage-provisioner=true in profile "cert-expiration-20220125165643-11219"
	I0125 17:00:46.973477   23624 addons.go:153] Setting addon storage-provisioner=true in "cert-expiration-20220125165643-11219"
	W0125 17:00:46.973498   23624 addons.go:165] addon storage-provisioner should already be in state true
	I0125 17:00:46.973582   23624 host.go:66] Checking if "cert-expiration-20220125165643-11219" exists ...
	I0125 17:00:46.973971   23624 cli_runner.go:133] Run: docker container inspect cert-expiration-20220125165643-11219 --format={{.State.Status}}
	I0125 17:00:46.975441   23624 cli_runner.go:133] Run: docker container inspect cert-expiration-20220125165643-11219 --format={{.State.Status}}
	I0125 17:00:46.991247   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:47.059444   23624 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0125 17:00:47.148346   23624 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0125 17:00:47.146815   23624 addons.go:153] Setting addon default-storageclass=true in "cert-expiration-20220125165643-11219"
	W0125 17:00:47.148376   23624 addons.go:165] addon default-storageclass should already be in state true
	I0125 17:00:47.148406   23624 host.go:66] Checking if "cert-expiration-20220125165643-11219" exists ...
	I0125 17:00:47.148479   23624 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0125 17:00:47.148485   23624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0125 17:00:47.148573   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:47.150472   23624 cli_runner.go:133] Run: docker container inspect cert-expiration-20220125165643-11219 --format={{.State.Status}}
	I0125 17:00:47.153484   23624 api_server.go:51] waiting for apiserver process to appear ...
	I0125 17:00:47.153544   23624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0125 17:00:47.178001   23624 api_server.go:71] duration metric: took 256.932706ms to wait for apiserver process to appear ...
	I0125 17:00:47.178019   23624 api_server.go:87] waiting for apiserver healthz status ...
	I0125 17:00:47.178027   23624 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:63044/healthz ...
	I0125 17:00:47.186372   23624 api_server.go:266] https://127.0.0.1:63044/healthz returned 200:
	ok
	I0125 17:00:47.187990   23624 api_server.go:140] control plane version: v1.23.2
	I0125 17:00:47.187998   23624 api_server.go:130] duration metric: took 9.975975ms to wait for apiserver health ...
	I0125 17:00:47.188011   23624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0125 17:00:47.194331   23624 system_pods.go:59] 7 kube-system pods found
	I0125 17:00:47.194340   23624 system_pods.go:61] "coredns-64897985d-jx6gn" [cd4ffadb-f863-4d91-a8dd-b7881822a736] Running
	I0125 17:00:47.194369   23624 system_pods.go:61] "etcd-cert-expiration-20220125165643-11219" [e8a23123-0d7a-4df4-892e-a7e07c746479] Running
	I0125 17:00:47.194371   23624 system_pods.go:61] "kube-apiserver-cert-expiration-20220125165643-11219" [c14fe708-c081-4c6c-9cba-1295da5d7d7c] Running
	I0125 17:00:47.194373   23624 system_pods.go:61] "kube-controller-manager-cert-expiration-20220125165643-11219" [86cff5e8-c128-4896-9188-65388b705c85] Running
	I0125 17:00:47.194375   23624 system_pods.go:61] "kube-proxy-4298k" [17149df9-d8b5-4313-8170-b03e1530ef38] Running
	I0125 17:00:47.194391   23624 system_pods.go:61] "kube-scheduler-cert-expiration-20220125165643-11219" [45e20e4f-4df6-428d-b7bc-b56b35d6465f] Running
	I0125 17:00:47.194393   23624 system_pods.go:61] "storage-provisioner" [791f59fc-d44f-4f97-bec9-db05e5ef9cc4] Running
	I0125 17:00:47.194395   23624 system_pods.go:74] duration metric: took 6.38149ms to wait for pod list to return data ...
	I0125 17:00:47.194399   23624 kubeadm.go:542] duration metric: took 273.341132ms to wait for : map[apiserver:true system_pods:true] ...
	I0125 17:00:47.194406   23624 node_conditions.go:102] verifying NodePressure condition ...
	I0125 17:00:47.198880   23624 node_conditions.go:122] node storage ephemeral capacity is 61255492Ki
	I0125 17:00:47.198893   23624 node_conditions.go:123] node cpu capacity is 6
	I0125 17:00:47.198910   23624 node_conditions.go:105] duration metric: took 4.498804ms to run NodePressure ...
	I0125 17:00:47.198915   23624 start.go:213] waiting for startup goroutines ...
	I0125 17:00:47.277455   23624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63038 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/cert-expiration-20220125165643-11219/id_rsa Username:docker}
	I0125 17:00:47.277506   23624 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0125 17:00:47.277511   23624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0125 17:00:47.278074   23624 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-20220125165643-11219
	I0125 17:00:47.383949   23624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0125 17:00:47.392758   23624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63038 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/cert-expiration-20220125165643-11219/id_rsa Username:docker}
	I0125 17:00:47.502263   23624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0125 17:00:47.676204   23624 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0125 17:00:47.676243   23624 addons.go:417] enableAddons completed in 755.167814ms
	I0125 17:00:47.717437   23624 start.go:493] kubectl: 1.19.7, cluster: 1.23.2 (minor skew: 4)
	I0125 17:00:47.743386   23624 out.go:176] 
	W0125 17:00:47.743531   23624 out.go:241] ! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilities with Kubernetes 1.23.2.
	I0125 17:00:47.769378   23624 out.go:176]   - Want kubectl v1.23.2? Try 'minikube kubectl -- get pods -A'
	I0125 17:00:47.821304   23624 out.go:176] * Done! kubectl is now configured to use "cert-expiration-20220125165643-11219" cluster and "default" namespace by default
	I0125 17:00:50.641429   23465 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (54.722817306s)
	I0125 17:00:50.641499   23465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0125 17:00:50.651861   23465 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0125 17:00:50.660553   23465 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0125 17:00:50.660609   23465 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0125 17:00:50.668159   23465 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0125 17:00:50.668189   23465 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	W0125 17:00:50.937719   23465 out.go:241] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0126 01:00:50.719601    5657 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	I0125 17:00:50.937745   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0125 17:00:51.024420   23465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0125 17:00:51.034204   23465 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0125 17:00:51.034261   23465 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0125 17:00:51.041804   23465 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0125 17:00:51.041829   23465 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0125 17:00:51.315365   23465 kubeadm.go:390] StartCluster complete in 55.573352274s
	I0125 17:00:51.315476   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0125 17:00:51.354845   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.354857   23465 logs.go:276] No container was found matching "kube-apiserver"
	I0125 17:00:51.354966   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0125 17:00:51.392838   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.392872   23465 logs.go:276] No container was found matching "etcd"
	I0125 17:00:51.392981   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0125 17:00:51.430867   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.430882   23465 logs.go:276] No container was found matching "coredns"
	I0125 17:00:51.430968   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0125 17:00:51.469319   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.469332   23465 logs.go:276] No container was found matching "kube-scheduler"
	I0125 17:00:51.469420   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0125 17:00:51.507689   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.507702   23465 logs.go:276] No container was found matching "kube-proxy"
	I0125 17:00:51.507810   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0125 17:00:51.546710   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.546736   23465 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0125 17:00:51.546862   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0125 17:00:51.584481   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.584494   23465 logs.go:276] No container was found matching "storage-provisioner"
	I0125 17:00:51.584627   23465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0125 17:00:51.622958   23465 logs.go:274] 0 containers: []
	W0125 17:00:51.622971   23465 logs.go:276] No container was found matching "kube-controller-manager"
	I0125 17:00:51.622978   23465 logs.go:123] Gathering logs for Docker ...
	I0125 17:00:51.622985   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0125 17:00:51.665886   23465 logs.go:123] Gathering logs for container status ...
	I0125 17:00:51.665902   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0125 17:00:53.756959   23465 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.091036951s)
	I0125 17:00:53.757083   23465 logs.go:123] Gathering logs for kubelet ...
	I0125 17:00:53.757091   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0125 17:00:53.820621   23465 logs.go:138] Found kubelet problem: Jan 26 00:59:51 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:51.753158    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	W0125 17:00:53.820847   23465 logs.go:138] Found kubelet problem: Jan 26 00:59:52 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:52.857684    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	W0125 17:00:53.821081   23465 logs.go:138] Found kubelet problem: Jan 26 00:59:52 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:52.866466    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.821403   23465 logs.go:138] Found kubelet problem: Jan 26 00:59:53 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:53.876628    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.821628   23465 logs.go:138] Found kubelet problem: Jan 26 00:59:54 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:54.880413    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.822110   23465 logs.go:138] Found kubelet problem: Jan 26 00:59:56 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:56.958614    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	W0125 17:00:53.822345   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:05 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:05.376602    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.822569   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:08 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:08.376394    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	W0125 17:00:53.823258   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:21 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:21.016916    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.824809   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:24 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:24.055560    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 40s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	W0125 17:00:53.825035   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:24 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:24.549477    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.825252   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:26 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:26.936774    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 40s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	W0125 17:00:53.830196   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:38 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:38.354375    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	W0125 17:00:53.830414   23465 logs.go:138] Found kubelet problem: Jan 26 01:00:38 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:38.354933    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 40s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	I0125 17:00:53.830537   23465 logs.go:123] Gathering logs for dmesg ...
	I0125 17:00:53.830545   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0125 17:00:53.852696   23465 logs.go:123] Gathering logs for describe nodes ...
	I0125 17:00:53.878411   23465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0125 17:00:53.937511   23465 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0125 17:00:53.937537   23465 out.go:370] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0126 01:00:51.091843    5767 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	W0125 17:00:53.937548   23465 out.go:241] * 
	W0125 17:00:53.937639   23465 out.go:241] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0126 01:00:51.091843    5767 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0125 17:00:53.937654   23465 out.go:241] * 
	W0125 17:00:53.938236   23465 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0125 17:00:54.010946   23465 out.go:176] X Problems detected in kubelet:
	I0125 17:00:54.057771   23465 out.go:176]   Jan 26 00:59:51 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:51.753158    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	I0125 17:00:54.130799   23465 out.go:176]   Jan 26 00:59:52 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:52.857684    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 20s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	I0125 17:00:54.176977   23465 out.go:176]   Jan 26 00:59:52 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 00:59:52.866466    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	I0125 17:00:54.222947   23465 out.go:176] 
	W0125 17:00:54.223098   23465 out.go:241] X Exiting due to GUEST_PORT_IN_USE: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.0
	[preflight] Running pre-flight checks
	
	stderr:
	W0126 01:00:51.091843    5767 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR Port-10257]: Port 10257 is in use
		[ERROR Port-2379]: Port 2379 is in use
		[ERROR Port-2380]: Port 2380 is in use
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	W0125 17:00:54.223229   23465 out.go:241] * Suggestion: kubeadm detected a TCP port conflict with another process: probably another local Kubernetes installation. Run lsof -p<port> to find the process and kill it
	W0125 17:00:54.223292   23465 out.go:241] * Related issue: https://github.com/kubernetes/minikube/issues/5484
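	Ports 2379/2380 are etcd's client and peer ports and 10257 is kube-controller-manager's secure port, so the [ERROR Port-*] preflight failures mean something — in this run most likely remnants of the previous control plane — was still listening when `kubeadm init` ran. A hedged way to see which processes hold those ports inside the node (assuming the profile name from this run and that iproute2's `ss` is available in the kicbase image):

	minikube ssh -p running-upgrade-20220125165756-11219 -- sudo ss -ltnp '( sport = :2379 or sport = :2380 or sport = :10257 )'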
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-01-26 00:58:05 UTC, end at Wed 2022-01-26 01:00:55 UTC. --
	Jan 26 01:00:23 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:23.497643464Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:23 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:23.497726912Z" level=warning msg="4e9a9ef2a4af0d6c962b502a3dd777b2c73f7975f5b59dba412dff61c84c3e3b cleanup: failed to unmount IPC: umount /var/lib/docker/containers/4e9a9ef2a4af0d6c962b502a3dd777b2c73f7975f5b59dba412dff61c84c3e3b/mounts/shm, flags: 0x2: no such file or directory"
	Jan 26 01:00:48 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:48.579697488Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:48 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:48.579955165Z" level=warning msg="2cfc9fcef78224c354beb6f4797907504a509effb60b731c00aaaf4be8116f10 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/2cfc9fcef78224c354beb6f4797907504a509effb60b731c00aaaf4be8116f10/mounts/shm, flags: 0x2: no such file or directory"
	Jan 26 01:00:48 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:48.771607357Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:48 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:48.912600067Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:48 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:48.912638556Z" level=warning msg="99c9f756d21c88c043b342559a50c33ce600c2b39874f378510880da4eaa5076 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/99c9f756d21c88c043b342559a50c33ce600c2b39874f378510880da4eaa5076/mounts/shm, flags: 0x2: no such file or directory"
	Jan 26 01:00:49 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:49.012393662Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:49 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:49.165202999Z" level=warning msg="e0802d6043028a57c7fc42b75bf2df856e09a66763bb8d5d17c7dd4cd496e2e1 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/e0802d6043028a57c7fc42b75bf2df856e09a66763bb8d5d17c7dd4cd496e2e1/mounts/shm, flags: 0x2: no such file or directory"
	Jan 26 01:00:49 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:49.165451926Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:49 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:49.264529096Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:49 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:49.264731604Z" level=warning msg="380f16740b9bc029531e641e7a336621782b9df0e9e9ef6b847d70828cf54d14 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/380f16740b9bc029531e641e7a336621782b9df0e9e9ef6b847d70828cf54d14/mounts/shm, flags: 0x2: no such file or directory"
	Jan 26 01:00:49 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:49.371007578Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:49 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:49.371032409Z" level=warning msg="3b012f79e3452753895e4fe93f6cdaa2a31373e49227190412d24f1d85dda7b7 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/3b012f79e3452753895e4fe93f6cdaa2a31373e49227190412d24f1d85dda7b7/mounts/shm, flags: 0x2: no such file or directory"
	Jan 26 01:00:49 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:49.497930233Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:49 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:49.608955744Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:49 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:49.720992743Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:49 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:49.963962626Z" level=warning msg="a772c9784cd2a7c153968660b0ab3a9b0b6ed776ea239ed0b183967350e18e39 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/a772c9784cd2a7c153968660b0ab3a9b0b6ed776ea239ed0b183967350e18e39/mounts/shm, flags: 0x2: no such file or directory"
	Jan 26 01:00:49 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:49.963965567Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:50 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:50.071654920Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:50 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:50.071701990Z" level=warning msg="d6efdfb3b809e385f0453fa38b991af3c652bfcb82703973b43b4b81d3e14ae2 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d6efdfb3b809e385f0453fa38b991af3c652bfcb82703973b43b4b81d3e14ae2/mounts/shm, flags: 0x2: no such file or directory"
	Jan 26 01:00:50 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:50.166202335Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:50 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:50.266842776Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:50 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:50.366757715Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 26 01:00:50 running-upgrade-20220125165756-11219 dockerd[3262]: time="2022-01-26T01:00:50.466711903Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
	time="2022-01-26T01:00:58Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.025594] bpfilter: read fail 0
	[  +0.025486] bpfilter: read fail 0
	[  +0.024757] bpfilter: write fail -32
	[  +0.027378] bpfilter: read fail 0
	[  +0.030685] bpfilter: write fail -32
	[  +0.030582] bpfilter: write fail -32
	[  +0.036799] bpfilter: write fail -32
	[  +0.032395] bpfilter: write fail -32
	[  +0.042770] bpfilter: read fail 0
	[  +0.033677] bpfilter: read fail 0
	[  +0.038123] bpfilter: read fail 0
	[  +0.031321] bpfilter: read fail 0
	[  +0.027903] bpfilter: read fail 0
	[  +0.027973] bpfilter: write fail -32
	[  +0.042427] bpfilter: read fail 0
	[  +0.032010] bpfilter: read fail 0
	[  +0.038066] bpfilter: write fail -32
	[  +0.029864] bpfilter: read fail 0
	[  +0.024991] bpfilter: read fail 0
	[  +0.032645] bpfilter: write fail -32
	[  +0.028661] bpfilter: write fail -32
	[  +0.032185] bpfilter: write fail -32
	[  +0.032074] bpfilter: write fail -32
	[  +0.056580] bpfilter: read fail 0
	[  +0.030000] bpfilter: write fail -32
	
	* 
	* ==> kernel <==
	*  01:00:58 up  1:02,  0 users,  load average: 2.75, 2.39, 2.05
	Linux running-upgrade-20220125165756-11219 5.10.25-linuxkit #1 SMP Tue Mar 23 09:27:39 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 19.10"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-01-26 00:58:05 UTC, end at Wed 2022-01-26 01:00:58 UTC. --
	Jan 26 01:00:31 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:31.097993    2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/8fec8b0b-e488-4e14-8044-75910281ea77-lib-modules") pod "kube-proxy-p7l72" (UID: "8fec8b0b-e488-4e14-8044-75910281ea77")
	Jan 26 01:00:31 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:31.098006    2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-ftdpg" (UniqueName: "kubernetes.io/secret/426146ee-7a16-4b2d-b789-ba07ff3615b3-kindnet-token-ftdpg") pod "kindnet-dwtd6" (UID: "426146ee-7a16-4b2d-b789-ba07ff3615b3")
	Jan 26 01:00:31 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:31.098017    2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/8fec8b0b-e488-4e14-8044-75910281ea77-xtables-lock") pod "kube-proxy-p7l72" (UID: "8fec8b0b-e488-4e14-8044-75910281ea77")
	Jan 26 01:00:31 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:31.098028    2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/426146ee-7a16-4b2d-b789-ba07ff3615b3-lib-modules") pod "kindnet-dwtd6" (UID: "426146ee-7a16-4b2d-b789-ba07ff3615b3")
	Jan 26 01:00:31 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:31.098040    2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/426146ee-7a16-4b2d-b789-ba07ff3615b3-xtables-lock") pod "kindnet-dwtd6" (UID: "426146ee-7a16-4b2d-b789-ba07ff3615b3")
	Jan 26 01:00:31 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:31.098053    2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-w6smm" (UniqueName: "kubernetes.io/secret/8fec8b0b-e488-4e14-8044-75910281ea77-kube-proxy-token-w6smm") pod "kube-proxy-p7l72" (UID: "8fec8b0b-e488-4e14-8044-75910281ea77")
	Jan 26 01:00:31 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:31.098064    2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/426146ee-7a16-4b2d-b789-ba07ff3615b3-cni-cfg") pod "kindnet-dwtd6" (UID: "426146ee-7a16-4b2d-b789-ba07ff3615b3")
	Jan 26 01:00:31 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:31.965840    2418 topology_manager.go:233] [topologymanager] Topology Admit Handler
	Jan 26 01:00:32 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:32.118835    2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-ksrs2" (UniqueName: "kubernetes.io/secret/27474215-7975-4b64-abcb-44e61a70900a-coredns-token-ksrs2") pod "coredns-66bff467f8-rrpxl" (UID: "27474215-7975-4b64-abcb-44e61a70900a")
	Jan 26 01:00:32 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:32.118900    2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/27474215-7975-4b64-abcb-44e61a70900a-config-volume") pod "coredns-66bff467f8-rrpxl" (UID: "27474215-7975-4b64-abcb-44e61a70900a")
	Jan 26 01:00:32 running-upgrade-20220125165756-11219 kubelet[2418]: W0126 01:00:32.490098    2418 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-rrpxl through plugin: invalid network status for
	Jan 26 01:00:32 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:32.966182    2418 topology_manager.go:233] [topologymanager] Topology Admit Handler
	Jan 26 01:00:33 running-upgrade-20220125165756-11219 kubelet[2418]: W0126 01:00:33.117643    2418 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-rrpxl through plugin: invalid network status for
	Jan 26 01:00:33 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:33.123686    2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5ff4e187-5fd2-4968-ab36-0271318a1bb4-config-volume") pod "coredns-66bff467f8-jzk6v" (UID: "5ff4e187-5fd2-4968-ab36-0271318a1bb4")
	Jan 26 01:00:33 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:33.123762    2418 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-ksrs2" (UniqueName: "kubernetes.io/secret/5ff4e187-5fd2-4968-ab36-0271318a1bb4-coredns-token-ksrs2") pod "coredns-66bff467f8-jzk6v" (UID: "5ff4e187-5fd2-4968-ab36-0271318a1bb4")
	Jan 26 01:00:33 running-upgrade-20220125165756-11219 kubelet[2418]: W0126 01:00:33.487287    2418 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-jzk6v through plugin: invalid network status for
	Jan 26 01:00:33 running-upgrade-20220125165756-11219 kubelet[2418]: W0126 01:00:33.790372    2418 container.go:526] Failed to update stats for container "/kubepods/besteffort/podcf0d63f18224a60f2c30a1e2114254d3/8570451afa6ce6511c10c3198b23a6b9cadacb0602bd74e742292a8628c592ab": unable to determine device info for dir: /var/lib/docker/overlay2/88567d81fd5e7d28f1885351278dc93923b093bf5f2485d17d4402f408a4eb40/diff: stat failed on /var/lib/docker/overlay2/88567d81fd5e7d28f1885351278dc93923b093bf5f2485d17d4402f408a4eb40/diff with error: no such file or directory, continuing to push stats
	Jan 26 01:00:34 running-upgrade-20220125165756-11219 kubelet[2418]: W0126 01:00:34.127631    2418 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-jzk6v through plugin: invalid network status for
	Jan 26 01:00:38 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:38.354045    2418 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 215d4e80a4b88ccb7aa44087ad6e4f718e7618f41dee527b3c16615a567f0d5d
	Jan 26 01:00:38 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:38.354375    2418 pod_workers.go:191] Error syncing pod c92479a2ea69d7c331c16a5105dd1b8c ("kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-20220125165756-11219_kube-system(c92479a2ea69d7c331c16a5105dd1b8c)"
	Jan 26 01:00:38 running-upgrade-20220125165756-11219 kubelet[2418]: I0126 01:00:38.354693    2418 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 4e9a9ef2a4af0d6c962b502a3dd777b2c73f7975f5b59dba412dff61c84c3e3b
	Jan 26 01:00:38 running-upgrade-20220125165756-11219 kubelet[2418]: E0126 01:00:38.354933    2418 pod_workers.go:191] Error syncing pod cf0d63f18224a60f2c30a1e2114254d3 ("etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "back-off 40s restarting failed container=etcd pod=etcd-running-upgrade-20220125165756-11219_kube-system(cf0d63f18224a60f2c30a1e2114254d3)"
	Jan 26 01:00:48 running-upgrade-20220125165756-11219 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Jan 26 01:00:48 running-upgrade-20220125165756-11219 systemd[1]: kubelet.service: Succeeded.
	Jan 26 01:00:48 running-upgrade-20220125165756-11219 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

-- /stdout --
** stderr ** 
	E0125 17:00:58.290662   23798 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p running-upgrade-20220125165756-11219 -n running-upgrade-20220125165756-11219
helpers_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p running-upgrade-20220125165756-11219 -n running-upgrade-20220125165756-11219: exit status 2 (636.534432ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:255: status error: exit status 2 (may be ok)
helpers_test.go:257: "running-upgrade-20220125165756-11219" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "running-upgrade-20220125165756-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-20220125165756-11219

=== CONT  TestRunningBinaryUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-20220125165756-11219: (6.620685639s)
--- FAIL: TestRunningBinaryUpgrade (189.87s)

TestNetworkPlugins/group/calico/Start (544.61s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-20220125165335-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p calico-20220125165335-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : exit status 80 (9m4.584758022s)

-- stdout --
	* [calico-20220125165335-11219] minikube v1.25.1 on Darwin 11.1
	  - MINIKUBE_LOCATION=13326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node calico-20220125165335-11219 in cluster calico-20220125165335-11219
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	  - kubelet.housekeeping-interval=5m
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0125 17:11:17.678798   28290 out.go:297] Setting OutFile to fd 1 ...
	I0125 17:11:17.678940   28290 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 17:11:17.678945   28290 out.go:310] Setting ErrFile to fd 2...
	I0125 17:11:17.678948   28290 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 17:11:17.679020   28290 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/bin
	I0125 17:11:17.679367   28290 out.go:304] Setting JSON to false
	I0125 17:11:17.705655   28290 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":9652,"bootTime":1643149825,"procs":323,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0125 17:11:17.705760   28290 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0125 17:11:17.732793   28290 out.go:176] * [calico-20220125165335-11219] minikube v1.25.1 on Darwin 11.1
	I0125 17:11:17.733008   28290 notify.go:174] Checking for updates...
	I0125 17:11:17.782618   28290 out.go:176]   - MINIKUBE_LOCATION=13326
	I0125 17:11:17.808511   28290 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	I0125 17:11:17.834428   28290 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0125 17:11:17.860640   28290 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0125 17:11:17.886379   28290 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	I0125 17:11:17.886791   28290 config.go:176] Loaded profile config "cilium-20220125165335-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 17:11:17.886837   28290 driver.go:344] Setting default libvirt URI to qemu:///system
	I0125 17:11:17.981594   28290 docker.go:132] docker version: linux-20.10.5
	I0125 17:11:17.981718   28290 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 17:11:18.140587   28290 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-26 01:11:18.087706938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 17:11:18.167207   28290 out.go:176] * Using the docker driver based on user configuration
	I0125 17:11:18.167240   28290 start.go:280] selected driver: docker
	I0125 17:11:18.167253   28290 start.go:795] validating driver "docker" against <nil>
	I0125 17:11:18.167278   28290 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0125 17:11:18.170961   28290 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 17:11:18.326113   28290 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-26 01:11:18.275891366 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 17:11:18.326230   28290 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0125 17:11:18.326355   28290 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0125 17:11:18.326374   28290 start_flags.go:828] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0125 17:11:18.326389   28290 cni.go:93] Creating CNI manager for "calico"
	I0125 17:11:18.326395   28290 start_flags.go:297] Found "Calico" CNI - setting NetworkPlugin=cni
	I0125 17:11:18.326403   28290 start_flags.go:302] config:
	{Name:calico-20220125165335-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:calico-20220125165335-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0125 17:11:18.376220   28290 out.go:176] * Starting control plane node calico-20220125165335-11219 in cluster calico-20220125165335-11219
	I0125 17:11:18.376275   28290 cache.go:120] Beginning downloading kic base image for docker with docker
	I0125 17:11:18.402265   28290 out.go:176] * Pulling base image ...
	I0125 17:11:18.402311   28290 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0125 17:11:18.402344   28290 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0125 17:11:18.402365   28290 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0125 17:11:18.402381   28290 cache.go:57] Caching tarball of preloaded images
	I0125 17:11:18.402496   28290 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0125 17:11:18.402537   28290 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
	I0125 17:11:18.403178   28290 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/config.json ...
	I0125 17:11:18.403293   28290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/config.json: {Name:mk1c50ca602e8e192dc075e4928119b10174ff51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:11:18.528389   28290 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0125 17:11:18.528420   28290 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0125 17:11:18.528430   28290 cache.go:208] Successfully downloaded all kic artifacts
	I0125 17:11:18.528482   28290 start.go:313] acquiring machines lock for calico-20220125165335-11219: {Name:mk4b87cc260ad0c8ceaf5d8d8abe5453d83e084c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 17:11:18.529363   28290 start.go:317] acquired machines lock for "calico-20220125165335-11219" in 869.205µs
	I0125 17:11:18.529406   28290 start.go:89] Provisioning new machine with config: &{Name:calico-20220125165335-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:calico-20220125165335-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}
	I0125 17:11:18.529493   28290 start.go:126] createHost starting for "" (driver="docker")
	I0125 17:11:18.615826   28290 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0125 17:11:18.616240   28290 start.go:160] libmachine.API.Create for "calico-20220125165335-11219" (driver="docker")
	I0125 17:11:18.616300   28290 client.go:168] LocalClient.Create starting
	I0125 17:11:18.616498   28290 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem
	I0125 17:11:18.616585   28290 main.go:130] libmachine: Decoding PEM data...
	I0125 17:11:18.616616   28290 main.go:130] libmachine: Parsing certificate...
	I0125 17:11:18.616745   28290 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem
	I0125 17:11:18.616799   28290 main.go:130] libmachine: Decoding PEM data...
	I0125 17:11:18.616817   28290 main.go:130] libmachine: Parsing certificate...
	I0125 17:11:18.617655   28290 cli_runner.go:133] Run: docker network inspect calico-20220125165335-11219 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0125 17:11:18.724982   28290 cli_runner.go:180] docker network inspect calico-20220125165335-11219 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0125 17:11:18.725088   28290 network_create.go:254] running [docker network inspect calico-20220125165335-11219] to gather additional debugging logs...
	I0125 17:11:18.725108   28290 cli_runner.go:133] Run: docker network inspect calico-20220125165335-11219
	W0125 17:11:18.828990   28290 cli_runner.go:180] docker network inspect calico-20220125165335-11219 returned with exit code 1
	I0125 17:11:18.829015   28290 network_create.go:257] error running [docker network inspect calico-20220125165335-11219]: docker network inspect calico-20220125165335-11219: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220125165335-11219
	I0125 17:11:18.829031   28290 network_create.go:259] output of [docker network inspect calico-20220125165335-11219]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220125165335-11219
	
	** /stderr **
	I0125 17:11:18.829114   28290 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0125 17:11:18.935212   28290 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003ee5f8] misses:0}
	I0125 17:11:18.935251   28290 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0125 17:11:18.935267   28290 network_create.go:106] attempt to create docker network calico-20220125165335-11219 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0125 17:11:18.935354   28290 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220125165335-11219
	W0125 17:11:19.041581   28290 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220125165335-11219 returned with exit code 1
	W0125 17:11:19.041619   28290 network_create.go:98] failed to create docker network calico-20220125165335-11219 192.168.49.0/24, will retry: subnet is taken
	I0125 17:11:19.041839   28290 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003ee5f8] amended:false}} dirty:map[] misses:0}
	I0125 17:11:19.041854   28290 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0125 17:11:19.042019   28290 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003ee5f8] amended:true}} dirty:map[192.168.49.0:0xc0003ee5f8 192.168.58.0:0xc0006502e0] misses:0}
	I0125 17:11:19.042031   28290 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0125 17:11:19.042037   28290 network_create.go:106] attempt to create docker network calico-20220125165335-11219 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0125 17:11:19.042112   28290 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220125165335-11219
	I0125 17:11:24.048236   28290 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220125165335-11219: (5.005559909s)
	I0125 17:11:24.048262   28290 network_create.go:90] docker network calico-20220125165335-11219 192.168.58.0/24 created
	I0125 17:11:24.048279   28290 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20220125165335-11219" container
	I0125 17:11:24.048408   28290 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0125 17:11:24.168690   28290 cli_runner.go:133] Run: docker volume create calico-20220125165335-11219 --label name.minikube.sigs.k8s.io=calico-20220125165335-11219 --label created_by.minikube.sigs.k8s.io=true
	I0125 17:11:24.280494   28290 oci.go:102] Successfully created a docker volume calico-20220125165335-11219
	I0125 17:11:24.280628   28290 cli_runner.go:133] Run: docker run --rm --name calico-20220125165335-11219-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220125165335-11219 --entrypoint /usr/bin/test -v calico-20220125165335-11219:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I0125 17:11:24.812931   28290 oci.go:106] Successfully prepared a docker volume calico-20220125165335-11219
	I0125 17:11:24.812977   28290 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0125 17:11:24.812989   28290 kic.go:179] Starting extracting preloaded images to volume ...
	I0125 17:11:24.813090   28290 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220125165335-11219:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I0125 17:11:31.374114   28290 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220125165335-11219:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (6.560481452s)
	I0125 17:11:31.374140   28290 kic.go:188] duration metric: took 6.560685 seconds to extract preloaded images to volume
	I0125 17:11:31.374268   28290 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0125 17:11:31.543145   28290 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220125165335-11219 --name calico-20220125165335-11219 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220125165335-11219 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220125165335-11219 --network calico-20220125165335-11219 --ip 192.168.58.2 --volume calico-20220125165335-11219:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I0125 17:11:37.842532   28290 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220125165335-11219 --name calico-20220125165335-11219 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220125165335-11219 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220125165335-11219 --network calico-20220125165335-11219 --ip 192.168.58.2 --volume calico-20220125165335-11219:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: (6.299002888s)
	I0125 17:11:37.842646   28290 cli_runner.go:133] Run: docker container inspect calico-20220125165335-11219 --format={{.State.Running}}
	I0125 17:11:37.978317   28290 cli_runner.go:133] Run: docker container inspect calico-20220125165335-11219 --format={{.State.Status}}
	I0125 17:11:38.095173   28290 cli_runner.go:133] Run: docker exec calico-20220125165335-11219 stat /var/lib/dpkg/alternatives/iptables
	I0125 17:11:38.266972   28290 oci.go:281] the created container "calico-20220125165335-11219" has a running status.
	I0125 17:11:38.267003   28290 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/calico-20220125165335-11219/id_rsa...
	I0125 17:11:38.440213   28290 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/calico-20220125165335-11219/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0125 17:11:38.629712   28290 cli_runner.go:133] Run: docker container inspect calico-20220125165335-11219 --format={{.State.Status}}
	I0125 17:11:38.746228   28290 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0125 17:11:38.746249   28290 kic_runner.go:114] Args: [docker exec --privileged calico-20220125165335-11219 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0125 17:11:38.914340   28290 cli_runner.go:133] Run: docker container inspect calico-20220125165335-11219 --format={{.State.Status}}
	I0125 17:11:39.027127   28290 machine.go:88] provisioning docker machine ...
	I0125 17:11:39.027179   28290 ubuntu.go:169] provisioning hostname "calico-20220125165335-11219"
	I0125 17:11:39.027286   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:11:39.141839   28290 main.go:130] libmachine: Using SSH client type: native
	I0125 17:11:39.142051   28290 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 57307 <nil> <nil>}
	I0125 17:11:39.142065   28290 main.go:130] libmachine: About to run SSH command:
	sudo hostname calico-20220125165335-11219 && echo "calico-20220125165335-11219" | sudo tee /etc/hostname
	I0125 17:11:39.287985   28290 main.go:130] libmachine: SSH cmd err, output: <nil>: calico-20220125165335-11219
	
	I0125 17:11:39.288097   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:11:39.403126   28290 main.go:130] libmachine: Using SSH client type: native
	I0125 17:11:39.403299   28290 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 57307 <nil> <nil>}
	I0125 17:11:39.403330   28290 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220125165335-11219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220125165335-11219/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220125165335-11219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0125 17:11:39.542188   28290 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0125 17:11:39.542218   28290 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube}
	I0125 17:11:39.542245   28290 ubuntu.go:177] setting up certificates
	I0125 17:11:39.542256   28290 provision.go:83] configureAuth start
	I0125 17:11:39.542343   28290 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220125165335-11219
	I0125 17:11:39.653824   28290 provision.go:138] copyHostCerts
	I0125 17:11:39.653919   28290 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem, removing ...
	I0125 17:11:39.653930   28290 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem
	I0125 17:11:39.655162   28290 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem (1082 bytes)
	I0125 17:11:39.655359   28290 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem, removing ...
	I0125 17:11:39.655376   28290 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem
	I0125 17:11:39.655445   28290 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem (1123 bytes)
	I0125 17:11:39.655605   28290 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem, removing ...
	I0125 17:11:39.655612   28290 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem
	I0125 17:11:39.655668   28290 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem (1675 bytes)
	I0125 17:11:39.655787   28290 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem org=jenkins.calico-20220125165335-11219 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220125165335-11219]
	I0125 17:11:39.870906   28290 provision.go:172] copyRemoteCerts
	I0125 17:11:39.871065   28290 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0125 17:11:39.871141   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:11:39.994967   28290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57307 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/calico-20220125165335-11219/id_rsa Username:docker}
	I0125 17:11:40.094965   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0125 17:11:40.115453   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0125 17:11:40.138750   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0125 17:11:40.160670   28290 provision.go:86] duration metric: configureAuth took 618.37229ms
	I0125 17:11:40.160686   28290 ubuntu.go:193] setting minikube options for container-runtime
	I0125 17:11:40.160846   28290 config.go:176] Loaded profile config "calico-20220125165335-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 17:11:40.160923   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:11:40.283405   28290 main.go:130] libmachine: Using SSH client type: native
	I0125 17:11:40.283604   28290 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 57307 <nil> <nil>}
	I0125 17:11:40.283620   28290 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0125 17:11:40.427988   28290 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0125 17:11:40.428011   28290 ubuntu.go:71] root file system type: overlay
	I0125 17:11:40.428147   28290 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0125 17:11:40.428261   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:11:40.544941   28290 main.go:130] libmachine: Using SSH client type: native
	I0125 17:11:40.545099   28290 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 57307 <nil> <nil>}
	I0125 17:11:40.545152   28290 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0125 17:11:40.700550   28290 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0125 17:11:40.700663   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:11:40.823670   28290 main.go:130] libmachine: Using SSH client type: native
	I0125 17:11:40.823881   28290 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 57307 <nil> <nil>}
	I0125 17:11:40.823900   28290 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0125 17:11:53.102701   28290 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-01-26 01:11:40.715760107 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0125 17:11:53.102722   28290 machine.go:91] provisioned docker machine in 14.075179988s
	I0125 17:11:53.102730   28290 client.go:171] LocalClient.Create took 34.484567054s
	I0125 17:11:53.102747   28290 start.go:168] duration metric: libmachine.API.Create for "calico-20220125165335-11219" took 34.484657218s
	I0125 17:11:53.102755   28290 start.go:267] post-start starting for "calico-20220125165335-11219" (driver="docker")
	I0125 17:11:53.102760   28290 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0125 17:11:53.102845   28290 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0125 17:11:53.103590   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:11:53.214407   28290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57307 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/calico-20220125165335-11219/id_rsa Username:docker}
	I0125 17:11:53.311613   28290 ssh_runner.go:195] Run: cat /etc/os-release
	I0125 17:11:53.315442   28290 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0125 17:11:53.315454   28290 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0125 17:11:53.315461   28290 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0125 17:11:53.315467   28290 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0125 17:11:53.315478   28290 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/addons for local assets ...
	I0125 17:11:53.315583   28290 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files for local assets ...
	I0125 17:11:53.316178   28290 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem -> 112192.pem in /etc/ssl/certs
	I0125 17:11:53.316384   28290 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0125 17:11:53.324595   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem --> /etc/ssl/certs/112192.pem (1708 bytes)
	I0125 17:11:53.342287   28290 start.go:270] post-start completed in 239.518397ms
	I0125 17:11:53.343405   28290 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220125165335-11219
	I0125 17:11:53.449571   28290 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/config.json ...
	I0125 17:11:53.449985   28290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0125 17:11:53.450048   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:11:53.555816   28290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57307 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/calico-20220125165335-11219/id_rsa Username:docker}
	I0125 17:11:53.648833   28290 start.go:129] duration metric: createHost completed in 35.117453715s
	I0125 17:11:53.648857   28290 start.go:80] releasing machines lock for "calico-20220125165335-11219", held for 35.117603094s
	I0125 17:11:53.648969   28290 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220125165335-11219
	I0125 17:11:53.758508   28290 ssh_runner.go:195] Run: systemctl --version
	I0125 17:11:53.758586   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:11:53.759205   28290 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0125 17:11:53.759368   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:11:53.873948   28290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57307 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/calico-20220125165335-11219/id_rsa Username:docker}
	I0125 17:11:53.874344   28290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57307 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/calico-20220125165335-11219/id_rsa Username:docker}
	I0125 17:11:53.966964   28290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0125 17:11:54.161146   28290 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0125 17:11:54.170979   28290 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0125 17:11:54.171036   28290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0125 17:11:54.180894   28290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0125 17:11:54.193630   28290 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0125 17:11:54.255447   28290 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0125 17:11:54.316880   28290 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0125 17:11:54.327720   28290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0125 17:11:54.383581   28290 ssh_runner.go:195] Run: sudo systemctl start docker
	I0125 17:11:54.394626   28290 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0125 17:11:54.502211   28290 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0125 17:11:54.609401   28290 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0125 17:11:54.609605   28290 cli_runner.go:133] Run: docker exec -t calico-20220125165335-11219 dig +short host.docker.internal
	I0125 17:11:54.798154   28290 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0125 17:11:54.798705   28290 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0125 17:11:54.803273   28290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
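
The two commands above implement "ensure exactly one host.minikube.internal record in /etc/hosts": the grep checks whether the record already exists, and the bash one-liner strips any stale record before appending the current one. A rough Go equivalent (setHostRecord is invented for this sketch, and it writes the file directly instead of going through the tmp-file-plus-sudo-cp dance the log uses):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostRecord rewrites an /etc/hosts-style file so that exactly one line
// maps name to ip, mirroring the grep -v + echo + cp one-liner in the log.
func setHostRecord(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, ln := range strings.Split(string(data), "\n") {
		// Drop any previous record for this name (and stray blank lines).
		if ln != "" && !strings.HasSuffix(ln, "\t"+name) {
			kept = append(kept, ln)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setHostRecord("/etc/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
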
	I0125 17:11:54.812914   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:11:54.946030   28290 out.go:176]   - kubelet.housekeeping-interval=5m
	I0125 17:11:54.946121   28290 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0125 17:11:54.946207   28290 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0125 17:11:54.976269   28290 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0125 17:11:54.976281   28290 docker.go:537] Images already preloaded, skipping extraction
	I0125 17:11:54.976376   28290 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0125 17:11:55.007754   28290 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0125 17:11:55.007769   28290 cache_images.go:84] Images are preloaded, skipping loading
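
The two near-identical docker images listings above are the preload check in action: every image required for v1.23.2 already shows up in the daemon's output, so the tarball extraction and image loading steps are both skipped. A rough sketch of the subset test behind "Images are preloaded" (imagesPreloaded is invented for this sketch; the lists are hard-coded for illustration):

package main

import "fmt"

// imagesPreloaded reports whether every required image already appears in
// the `docker images --format {{.Repository}}:{{.Tag}}` output, in which
// case unpacking the preload tarball can be skipped.
func imagesPreloaded(got, required []string) bool {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	for _, img := range required {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	got := []string{
		"k8s.gcr.io/kube-apiserver:v1.23.2",
		"k8s.gcr.io/etcd:3.5.1-0",
		"k8s.gcr.io/pause:3.6",
	}
	required := []string{"k8s.gcr.io/kube-apiserver:v1.23.2", "k8s.gcr.io/pause:3.6"}
	fmt.Println(imagesPreloaded(got, required)) // true: extraction skipped
}
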
	I0125 17:11:55.007872   28290 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0125 17:11:55.096722   28290 cni.go:93] Creating CNI manager for "calico"
	I0125 17:11:55.096742   28290 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0125 17:11:55.096756   28290 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220125165335-11219 NodeName:calico-20220125165335-11219 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0125 17:11:55.096868   28290 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20220125165335-11219"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0125 17:11:55.096962   28290 kubeadm.go:791] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20220125165335-11219 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:calico-20220125165335-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
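
The kubelet unit above, with its empty ExecStart= reset followed by one fully expanded flag line, is rendered from the cluster config dumped right after it: --hostname-override, --node-ip, and the binary path under /var/lib/minikube/binaries all come from that struct. A toy text/template rendering of the same drop-in; the kubeletOpts type is invented for this sketch and is not minikube's config struct, and the flag line is abbreviated:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts carries only the values this sketch needs; the real config
// struct in the log has many more fields.
type kubeletOpts struct {
	Version, NodeName, NodeIP string
}

const unitTmpl = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime=docker --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --network-plugin=cni
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the log above.
	opts := kubeletOpts{Version: "v1.23.2", NodeName: "calico-20220125165335-11219", NodeIP: "192.168.58.2"}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
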
	I0125 17:11:55.097035   28290 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0125 17:11:55.105253   28290 binaries.go:44] Found k8s binaries, skipping transfer
	I0125 17:11:55.105328   28290 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0125 17:11:55.112856   28290 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (401 bytes)
	I0125 17:11:55.126270   28290 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0125 17:11:55.139131   28290 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
	I0125 17:11:55.151928   28290 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0125 17:11:55.156109   28290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0125 17:11:55.165650   28290 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219 for IP: 192.168.58.2
	I0125 17:11:55.165784   28290 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.key
	I0125 17:11:55.165841   28290 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.key
	I0125 17:11:55.165892   28290 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/client.key
	I0125 17:11:55.165907   28290 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/client.crt with IP's: []
	I0125 17:11:55.208600   28290 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/client.crt ...
	I0125 17:11:55.208615   28290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/client.crt: {Name:mkb6954eb897b88ff258a2bf549f3a37ca80d124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:11:55.210464   28290 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/client.key ...
	I0125 17:11:55.210473   28290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/client.key: {Name:mka77f8cb4370d04ffb7eaa539bd1bbc4a2b8735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:11:55.210863   28290 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/apiserver.key.cee25041
	I0125 17:11:55.210885   28290 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0125 17:11:55.337061   28290 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/apiserver.crt.cee25041 ...
	I0125 17:11:55.337077   28290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/apiserver.crt.cee25041: {Name:mk9cb2602d9ab2c6506b2eedbcb7fe822aac5dd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:11:55.338238   28290 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/apiserver.key.cee25041 ...
	I0125 17:11:55.338248   28290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/apiserver.key.cee25041: {Name:mk71616d2056cb517b47ac1e5efd51f8939caec7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:11:55.338441   28290 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/apiserver.crt
	I0125 17:11:55.338608   28290 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/apiserver.key
	I0125 17:11:55.338767   28290 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/proxy-client.key
	I0125 17:11:55.338790   28290 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/proxy-client.crt with IP's: []
	I0125 17:11:55.434504   28290 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/proxy-client.crt ...
	I0125 17:11:55.434518   28290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/proxy-client.crt: {Name:mk224235c36626b39c729f3d895ab4eed91a04fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:11:55.435907   28290 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/proxy-client.key ...
	I0125 17:11:55.435929   28290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/proxy-client.key: {Name:mkfd721f4c97380421a5b9c87dc3613a312d3583 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:11:55.436840   28290 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219.pem (1338 bytes)
	W0125 17:11:55.436913   28290 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219_empty.pem, impossibly tiny 0 bytes
	I0125 17:11:55.436926   28290 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem (1675 bytes)
	I0125 17:11:55.436966   28290 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem (1082 bytes)
	I0125 17:11:55.437003   28290 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem (1123 bytes)
	I0125 17:11:55.437093   28290 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem (1675 bytes)
	I0125 17:11:55.437179   28290 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem (1708 bytes)
	I0125 17:11:55.438025   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0125 17:11:55.457380   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0125 17:11:55.475003   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0125 17:11:55.492597   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/calico-20220125165335-11219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0125 17:11:55.510678   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0125 17:11:55.527836   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0125 17:11:55.546087   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0125 17:11:55.581519   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0125 17:11:55.605433   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219.pem --> /usr/share/ca-certificates/11219.pem (1338 bytes)
	I0125 17:11:55.628383   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem --> /usr/share/ca-certificates/112192.pem (1708 bytes)
	I0125 17:11:55.650411   28290 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0125 17:11:55.672465   28290 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
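
The crypto.go lines above generate the profile's client, apiserver, and aggregator cert pairs, signing the apiserver cert for the IP SANs [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1] before everything is scp'd into /var/lib/minikube/certs. A toy version of that issuance step using only the standard library; it mints a throwaway CA in-process (minikube instead reuses its cached minikubeCA key, as the "skipping minikubeCA CA generation" line shows):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key; a real setup would persist and reuse this.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses: []net.IP{ // the SANs from the log above
			net.ParseIP("192.168.58.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
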
	I0125 17:11:55.691010   28290 ssh_runner.go:195] Run: openssl version
	I0125 17:11:55.699911   28290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11219.pem && ln -fs /usr/share/ca-certificates/11219.pem /etc/ssl/certs/11219.pem"
	I0125 17:11:55.711450   28290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11219.pem
	I0125 17:11:55.716761   28290 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 26 00:05 /usr/share/ca-certificates/11219.pem
	I0125 17:11:55.716829   28290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11219.pem
	I0125 17:11:55.725303   28290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11219.pem /etc/ssl/certs/51391683.0"
	I0125 17:11:55.735121   28290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112192.pem && ln -fs /usr/share/ca-certificates/112192.pem /etc/ssl/certs/112192.pem"
	I0125 17:11:55.748148   28290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112192.pem
	I0125 17:11:55.753382   28290 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 26 00:05 /usr/share/ca-certificates/112192.pem
	I0125 17:11:55.753501   28290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112192.pem
	I0125 17:11:55.760451   28290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112192.pem /etc/ssl/certs/3ec20f2e.0"
	I0125 17:11:55.770419   28290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0125 17:11:55.781103   28290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0125 17:11:55.786965   28290 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 26 00:00 /usr/share/ca-certificates/minikubeCA.pem
	I0125 17:11:55.787019   28290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0125 17:11:55.793403   28290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0125 17:11:55.803252   28290 kubeadm.go:388] StartCluster: {Name:calico-20220125165335-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:calico-20220125165335-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0125 17:11:55.803390   28290 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0125 17:11:55.840974   28290 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0125 17:11:55.849722   28290 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0125 17:11:55.859608   28290 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0125 17:11:55.859673   28290 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0125 17:11:55.868758   28290 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
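
The exit-status-2 failure above is the expected outcome on a fresh node: none of the /etc/kubernetes/*.conf files exist yet, so there is no stale configuration to clean up, and the run proceeds straight to kubeadm init. A sketch of that decision, keying off the ls exit code the way the log does (hasStaleConfig is invented for this sketch):

package main

import (
	"fmt"
	"os/exec"
)

// hasStaleConfig returns true only when kubeconfig files from a previous
// cluster are present; ls exiting non-zero (status 2 in the log) means a
// clean node, so the cleanup step can be skipped.
func hasStaleConfig(paths ...string) bool {
	args := append([]string{"-la"}, paths...)
	return exec.Command("ls", args...).Run() == nil
}

func main() {
	stale := hasStaleConfig(
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
	)
	fmt.Println("stale config present:", stale)
}
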
	I0125 17:11:55.868790   28290 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0125 17:11:56.491720   28290 out.go:203]   - Generating certificates and keys ...
	I0125 17:11:59.335364   28290 out.go:203]   - Booting up control plane ...
	I0125 17:12:07.875778   28290 out.go:203]   - Configuring RBAC rules ...
	I0125 17:12:08.260675   28290 cni.go:93] Creating CNI manager for "calico"
	I0125 17:12:08.303410   28290 out.go:176] * Configuring Calico (Container Networking Interface) ...
	I0125 17:12:08.303727   28290 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.2/kubectl ...
	I0125 17:12:08.303737   28290 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0125 17:12:08.346599   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0125 17:12:09.537871   28290 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.191238722s)
	I0125 17:12:09.537901   28290 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0125 17:12:09.538021   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:09.538034   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=f2b90e74c34b616e7f63aca230995ce4db99c965 minikube.k8s.io/name=calico-20220125165335-11219 minikube.k8s.io/updated_at=2022_01_25T17_12_09_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:09.638428   28290 ops.go:34] apiserver oom_adj: -16
	I0125 17:12:09.638524   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:10.198213   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:10.698207   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:11.198524   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:11.702211   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:12.199308   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:12.698134   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:13.201779   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:13.702671   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:14.198238   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:14.698255   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:15.198293   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:15.704078   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:16.198284   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:16.698339   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:17.198338   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:17.698358   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:18.202887   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:18.698285   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:19.199656   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:19.698424   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:20.198351   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:20.698368   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:21.198361   28290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:12:21.255232   28290 kubeadm.go:867] duration metric: took 11.717190124s to wait for elevateKubeSystemPrivileges.
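
The half-second cadence of the kubectl get sa default lines above is a plain retry loop: kubeadm init has finished, but the default service account only appears once the controller-manager's token controller has run, so the lookup is retried until it succeeds (about 11.7s here). A stripped-down version of such a loop (waitForDefaultSA is invented for this sketch):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds or the
// timeout elapses, roughly matching the ~500ms retry cadence in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not created within %v", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
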
	I0125 17:12:21.255251   28290 kubeadm.go:390] StartCluster complete in 25.451692832s
	I0125 17:12:21.255261   28290 settings.go:142] acquiring lock: {Name:mk4b38f66d2c1d7ad910ce332a6e0f9663533ce8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:12:21.255351   28290 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	I0125 17:12:21.255913   28290 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig: {Name:mk22ac11166e634b93c7a48f1f20a682ee77d8e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:12:21.776923   28290 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220125165335-11219" rescaled to 1
	I0125 17:12:21.777001   28290 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}
	I0125 17:12:21.777032   28290 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0125 17:12:21.821988   28290 out.go:176] * Verifying Kubernetes components...
	I0125 17:12:21.777035   28290 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0125 17:12:21.777189   28290 config.go:176] Loaded profile config "calico-20220125165335-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 17:12:21.822157   28290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0125 17:12:21.822172   28290 addons.go:65] Setting default-storageclass=true in profile "calico-20220125165335-11219"
	I0125 17:12:21.822172   28290 addons.go:65] Setting storage-provisioner=true in profile "calico-20220125165335-11219"
	I0125 17:12:21.822222   28290 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220125165335-11219"
	I0125 17:12:21.822282   28290 addons.go:153] Setting addon storage-provisioner=true in "calico-20220125165335-11219"
	W0125 17:12:21.822311   28290 addons.go:165] addon storage-provisioner should already be in state true
	I0125 17:12:21.822352   28290 host.go:66] Checking if "calico-20220125165335-11219" exists ...
	I0125 17:12:21.823052   28290 cli_runner.go:133] Run: docker container inspect calico-20220125165335-11219 --format={{.State.Status}}
	I0125 17:12:21.838368   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:12:21.859103   28290 cli_runner.go:133] Run: docker container inspect calico-20220125165335-11219 --format={{.State.Status}}
	I0125 17:12:21.937606   28290 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0125 17:12:22.000260   28290 node_ready.go:35] waiting up to 5m0s for node "calico-20220125165335-11219" to be "Ready" ...
	I0125 17:12:22.022311   28290 addons.go:153] Setting addon default-storageclass=true in "calico-20220125165335-11219"
	I0125 17:12:22.038878   28290 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0125 17:12:22.038932   28290 addons.go:165] addon default-storageclass should already be in state true
	I0125 17:12:22.023263   28290 node_ready.go:49] node "calico-20220125165335-11219" has status "Ready":"True"
	I0125 17:12:22.038961   28290 host.go:66] Checking if "calico-20220125165335-11219" exists ...
	I0125 17:12:22.038969   28290 node_ready.go:38] duration metric: took 38.684776ms waiting for node "calico-20220125165335-11219" to be "Ready" ...
	I0125 17:12:22.038987   28290 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0125 17:12:22.039034   28290 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0125 17:12:22.039043   28290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0125 17:12:22.039136   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:12:22.040363   28290 cli_runner.go:133] Run: docker container inspect calico-20220125165335-11219 --format={{.State.Status}}
	I0125 17:12:22.055818   28290 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace to be "Ready" ...
	I0125 17:12:22.194376   28290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57307 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/calico-20220125165335-11219/id_rsa Username:docker}
	I0125 17:12:22.194423   28290 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0125 17:12:22.194454   28290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0125 17:12:22.194546   28290 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220125165335-11219
	I0125 17:12:22.351981   28290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0125 17:12:22.364517   28290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57307 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/calico-20220125165335-11219/id_rsa Username:docker}
	I0125 17:12:22.541273   28290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0125 17:12:23.274238   28290 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.336578424s)
	I0125 17:12:23.274258   28290 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
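
The pipeline that just completed splices a hosts block into the CoreDNS Corefile immediately before the forward plugin line, so host.minikube.internal resolves in-cluster before queries fall through to the host resolver. The same edit in plain string form (injectHostRecord is invented for this sketch, and the Corefile is abbreviated):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block immediately before the `forward`
// plugin line, mirroring the sed command in the log above.
func injectHostRecord(corefile, ip, name string) string {
	block := fmt.Sprintf("    hosts {\n       %s %s\n       fallthrough\n    }\n", ip, name)
	var b strings.Builder
	for _, ln := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(ln, "forward . /etc/resolv.conf") {
			b.WriteString(block)
		}
		b.WriteString(ln)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n    forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.65.2", "host.minikube.internal"))
}
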
	I0125 17:12:23.355798   28290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.003771306s)
	I0125 17:12:23.384236   28290 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0125 17:12:23.384252   28290 addons.go:417] enableAddons completed in 1.607213109s
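
The long run of pod_ready lines that follows tracks a single signal: the Ready condition of the calico-kube-controllers pod, which stays False until the Calico images finish pulling and the controller comes up. One way to read that condition outside minikube, via kubectl's JSONPath output (podReady is a hypothetical standalone check, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// podReady reads a pod's Ready condition via kubectl JSONPath, the same
// signal the pod_ready lines below report as "Ready":"False".
func podReady(ns, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	ok, err := podReady("kube-system", "calico-kube-controllers-8594699699-blr9j")
	fmt.Println(ok, err)
}
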
	I0125 17:12:24.122960   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:26.577247   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:28.580785   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:31.076796   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:33.079840   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:35.615461   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:38.082545   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:40.581863   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:43.076599   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:45.080060   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:47.581978   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:50.075897   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:52.077606   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:54.078866   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:56.079575   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:12:58.577182   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:00.598236   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:03.076531   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:05.078175   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:07.579216   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:10.077818   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:12.576758   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:14.576983   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:17.079917   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:19.087637   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:21.585886   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:24.084188   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:26.583341   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:29.080527   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:31.082587   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:33.576903   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:36.077549   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:38.078463   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:40.582709   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:43.086088   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:45.577735   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:47.578564   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:50.080096   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:52.083231   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:54.085219   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:56.577772   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:13:58.583240   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:01.083663   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:03.086662   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:05.576622   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:08.077832   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:10.084147   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:12.583050   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:15.077011   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:17.080654   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:19.582639   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:22.076848   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:24.076965   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:26.077573   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:28.079628   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:30.577839   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:32.579166   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:34.581050   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:36.581399   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:39.081973   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:41.576902   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:43.580575   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:46.078265   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:48.578403   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:50.580353   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:53.080496   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:55.579208   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:14:58.080858   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:00.613691   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:03.076347   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:05.085822   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:07.582714   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:10.079170   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:12.577850   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:14.578016   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:16.578518   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:18.580320   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:20.581087   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:23.076948   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:25.078360   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:27.079422   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:29.578130   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:32.090621   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:34.578794   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:36.579341   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:39.080675   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:41.088852   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:43.579122   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:45.580201   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:48.079364   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:50.579737   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:53.077815   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:55.083190   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:57.576851   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:15:59.578680   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:01.579860   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:03.580175   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:06.077662   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:08.078084   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:10.079622   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:12.082229   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:14.581357   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:17.078574   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:19.081833   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:21.582741   28290 pod_ready.go:102] pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:22.084417   28290 pod_ready.go:81] duration metric: took 4m0.02682966s waiting for pod "calico-kube-controllers-8594699699-blr9j" in "kube-system" namespace to be "Ready" ...
	E0125 17:16:22.084429   28290 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0125 17:16:22.084438   28290 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-w6fvf" in "kube-system" namespace to be "Ready" ...
	I0125 17:16:24.096226   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:26.097543   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:28.596557   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:30.599484   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:32.603714   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:35.097402   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:37.601408   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:39.601620   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:42.098125   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:44.602317   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:47.096828   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:49.597757   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:52.096403   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:54.097592   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:56.101293   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:16:58.599093   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:00.602422   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:03.101493   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:05.598718   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:08.097086   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:10.596843   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:12.599267   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:15.097895   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:17.596496   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:20.101486   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:22.597690   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:25.102218   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:27.604849   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:30.102625   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:32.598553   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:34.601038   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:37.098293   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:39.596235   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:41.597133   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:43.597699   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:46.104379   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:48.104488   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:50.599728   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:53.105046   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:55.605181   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:17:58.099315   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:00.105074   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:02.602014   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:05.106613   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:07.596483   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:09.599983   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:11.602980   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:14.105766   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:16.597700   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:18.604680   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:21.099741   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:23.106597   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:25.600178   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:28.097365   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:30.600102   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:33.102284   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:35.604063   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:38.097291   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:40.097690   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:42.098995   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:44.596857   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:46.603309   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:49.096822   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:51.102475   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:53.597999   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:55.598157   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:18:57.605177   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:00.101198   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:02.603306   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:05.103845   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:07.603531   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:10.099330   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:12.105740   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:14.605250   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:17.098636   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:19.598017   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:21.598463   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:23.600926   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:26.097710   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:28.103172   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:30.604664   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:32.605582   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:35.098931   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:37.102333   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:39.109318   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:41.600890   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:44.099712   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:46.601579   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:49.098238   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:51.596749   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:53.601446   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:56.098633   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:19:58.104176   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:20:00.606442   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:20:03.104690   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:20:05.607561   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:20:08.101906   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:20:10.600332   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:20:13.098557   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:20:15.105131   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:20:17.604131   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:20:20.100419   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:20:22.103703   28290 pod_ready.go:102] pod "calico-node-w6fvf" in "kube-system" namespace has status "Ready":"False"
	I0125 17:20:22.103715   28290 pod_ready.go:81] duration metric: took 4m0.017566851s waiting for pod "calico-node-w6fvf" in "kube-system" namespace to be "Ready" ...
	E0125 17:20:22.103720   28290 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0125 17:20:22.103734   28290 pod_ready.go:38] duration metric: took 8m0.061288188s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0125 17:20:22.131591   28290 out.go:176] 
	W0125 17:20:22.131729   28290 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0125 17:20:22.131743   28290 out.go:241] * 
	W0125 17:20:22.132767   28290 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0125 17:20:22.204154   28290 out.go:176] 

** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (544.61s)
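Note on the failure mode above: minikube's pod_ready.go wait loop polls each pod's Ready condition until a per-pod timeout expires, which is why the log shows hundreds of "Ready":"False" entries before the GUEST_START exit. For readers who want to reproduce the readiness check outside minikube, the following is a minimal client-go sketch of that kind of poll; waitPodReady, the 2-second interval, and the kubeconfig path are illustrative assumptions, not minikube's actual implementation.

	// Minimal sketch of a pod-readiness poll in the spirit of minikube's
	// pod_ready.go wait loop (illustrative only, not the real code).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the pod's Ready condition every 2s until timeout.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for pod %s/%s to be Ready", ns, name)
	}

	func main() {
		// Assumes a reachable cluster via the default kubeconfig (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(cs, "kube-system", "calico-node-w6fvf", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}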

x
+
TestNetworkPlugins/group/enable-default-cni/DNS (364.63s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:15:03.989253   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137797762s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.153620374s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147892377s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:15:51.714638   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:15:51.720593   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:15:51.730855   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:15:51.754603   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:15:51.795245   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:15:51.875502   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:15:52.039286   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:15:52.359838   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:15:53.001409   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:15:54.289416   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:15:56.852247   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.124569484s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0125 17:16:01.974209   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:16:12.215245   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:16:15.454218   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:16:15.459962   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:16:15.470052   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:16:15.490836   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:16:15.534782   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:16:15.616785   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:16:15.776919   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:16:16.097193   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:16:16.738223   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.151893317s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0125 17:16:18.019305   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:16:20.583636   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:16:25.704913   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:16:25.912456   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
E0125 17:16:32.702074   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:16:35.953281   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141635983s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:16:54.009785   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 17:16:56.434229   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.124724909s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0125 17:17:13.663905   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12898635s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0125 17:17:37.394855   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:17:44.640299   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130961265s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0125 17:18:24.952496   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:18:24.959062   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:18:24.971822   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:18:24.992238   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:18:25.032461   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:18:25.113118   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:18:25.281357   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:18:25.601480   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:18:26.241623   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:18:27.529935   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:18:30.090747   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:18:35.218167   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:18:35.584683   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137113524s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0125 17:18:42.026394   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
E0125 17:18:45.463296   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:18:59.315686   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:19:04.981507   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 17:19:05.950774   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:19:09.763511   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128595403s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0125 17:19:46.917403   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:20:51.715098   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.158100456s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (364.63s)
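Note on the failure mode above: the DNS subtest execs `nslookup kubernetes.default` inside a netcat deployment and expects the kubernetes service ClusterIP (10.96.0.1) in the output; every attempt here timed out with "no servers could be reached" instead, meaning the pod never reached cluster DNS. Below is an illustrative standalone probe of the same lookup, assuming it runs inside a pod on the cluster network; the fully qualified name and the 15-second timeout are assumptions, not part of the test.

	// Illustrative in-cluster DNS probe, analogous to the test's
	// `nslookup kubernetes.default` check. Must run inside a pod so
	// the cluster DNS server is reachable.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()

		// "kubernetes.default" resolves via the pod's DNS search domains
		// to kubernetes.default.svc.cluster.local.
		addrs, err := net.DefaultResolver.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		if err != nil {
			fmt.Println("DNS lookup failed:", err) // the failing runs above end here
			return
		}
		fmt.Println("resolved to:", addrs) // a healthy cluster typically returns 10.96.0.1
	}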

x
+
TestNetworkPlugins/group/kindnet/Start (310.34s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-20220125165335-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:99: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kindnet-20220125165335-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : exit status 80 (5m10.318941223s)

-- stdout --
	* [kindnet-20220125165335-11219] minikube v1.25.1 on Darwin 11.1
	  - MINIKUBE_LOCATION=13326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node kindnet-20220125165335-11219 in cluster kindnet-20220125165335-11219
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	  - kubelet.housekeeping-interval=5m
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0125 17:20:43.701450   29351 out.go:297] Setting OutFile to fd 1 ...
	I0125 17:20:43.701585   29351 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 17:20:43.701590   29351 out.go:310] Setting ErrFile to fd 2...
	I0125 17:20:43.701594   29351 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 17:20:43.701675   29351 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/bin
	I0125 17:20:43.702010   29351 out.go:304] Setting JSON to false
	I0125 17:20:43.729625   29351 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":10218,"bootTime":1643149825,"procs":318,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0125 17:20:43.729707   29351 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0125 17:20:43.762366   29351 out.go:176] * [kindnet-20220125165335-11219] minikube v1.25.1 on Darwin 11.1
	I0125 17:20:43.762549   29351 notify.go:174] Checking for updates...
	I0125 17:20:43.809555   29351 out.go:176]   - MINIKUBE_LOCATION=13326
	I0125 17:20:43.835567   29351 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	I0125 17:20:43.861305   29351 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0125 17:20:43.887339   29351 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0125 17:20:43.913307   29351 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	I0125 17:20:43.913857   29351 config.go:176] Loaded profile config "enable-default-cni-20220125165334-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 17:20:43.913935   29351 driver.go:344] Setting default libvirt URI to qemu:///system
	I0125 17:20:44.008000   29351 docker.go:132] docker version: linux-20.10.5
	I0125 17:20:44.008151   29351 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 17:20:44.167632   29351 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-26 01:20:44.114429711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
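The docker info line above is the output of `docker system info --format "{{json .}}"` decoded into a Go struct (info.go). A minimal, self-contained sketch of that pattern follows; the struct here names only a handful of the fields visible in the log, and is an illustrative stand-in for minikube's real type, not a copy of it.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo decodes a few of the fields visible in the log line above;
// the real info.go struct carries many more.
type dockerInfo struct {
	ID              string
	NCPU            int
	MemTotal        int64
	OperatingSystem string
	ServerVersion   string
}

func main() {
	// Same invocation the log shows via cli_runner.go.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal)
}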
	I0125 17:20:44.214660   29351 out.go:176] * Using the docker driver based on user configuration
	I0125 17:20:44.214718   29351 start.go:280] selected driver: docker
	I0125 17:20:44.214728   29351 start.go:795] validating driver "docker" against <nil>
	I0125 17:20:44.214760   29351 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0125 17:20:44.218752   29351 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 17:20:44.386328   29351 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-26 01:20:44.334812437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 17:20:44.386454   29351 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0125 17:20:44.386570   29351 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0125 17:20:44.386590   29351 start_flags.go:828] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0125 17:20:44.386607   29351 cni.go:93] Creating CNI manager for "kindnet"
	I0125 17:20:44.386638   29351 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0125 17:20:44.386648   29351 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0125 17:20:44.386656   29351 start_flags.go:297] Found "CNI" CNI - setting NetworkPlugin=cni
	I0125 17:20:44.386666   29351 start_flags.go:302] config:
	{Name:kindnet-20220125165335-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:kindnet-20220125165335-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0125 17:20:44.435037   29351 out.go:176] * Starting control plane node kindnet-20220125165335-11219 in cluster kindnet-20220125165335-11219
	I0125 17:20:44.435112   29351 cache.go:120] Beginning downloading kic base image for docker with docker
	I0125 17:20:44.461234   29351 out.go:176] * Pulling base image ...
	I0125 17:20:44.461351   29351 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0125 17:20:44.461351   29351 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0125 17:20:44.461443   29351 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4
	I0125 17:20:44.461480   29351 cache.go:57] Caching tarball of preloaded images
	I0125 17:20:44.461704   29351 preload.go:174] Found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0125 17:20:44.462384   29351 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.2 on docker
	I0125 17:20:44.462908   29351 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/config.json ...
	I0125 17:20:44.462993   29351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/config.json: {Name:mk455b87d50ebe25af5284bf5f73990aad6cc24f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:20:44.573123   29351 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon, skipping pull
	I0125 17:20:44.573159   29351 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in daemon, skipping load
	I0125 17:20:44.573171   29351 cache.go:208] Successfully downloaded all kic artifacts
	I0125 17:20:44.573223   29351 start.go:313] acquiring machines lock for kindnet-20220125165335-11219: {Name:mk9f07e28a643b4bffc85646806afe64db9cc69b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 17:20:44.574180   29351 start.go:317] acquired machines lock for "kindnet-20220125165335-11219" in 946.441µs
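The machines lock acquired above prints a spec with Name/Clock/Delay/Timeout/Cancel fields, i.e. a named mutex retried every 500ms with a 10m deadline. The sketch below illustrates that acquire-with-retry-and-timeout shape using a plain exclusive lock file; the lock-file mechanism is an assumption for illustration, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// tryLock attempts to create lockPath exclusively, retrying every delay
// until timeout elapses -- the same Delay/Timeout behavior the log's
// lock spec describes (hypothetical lock-file variant).
func tryLock(lockPath string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lockPath) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring " + lockPath)
		}
		time.Sleep(delay)
	}
}

func main() {
	unlock, err := tryLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer unlock()
	fmt.Println("lock held; machine provisioning would happen here")
}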
	I0125 17:20:44.574216   29351 start.go:89] Provisioning new machine with config: &{Name:kindnet-20220125165335-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:kindnet-20220125165335-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}
	I0125 17:20:44.574299   29351 start.go:126] createHost starting for "" (driver="docker")
	I0125 17:20:44.621547   29351 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0125 17:20:44.621946   29351 start.go:160] libmachine.API.Create for "kindnet-20220125165335-11219" (driver="docker")
	I0125 17:20:44.621998   29351 client.go:168] LocalClient.Create starting
	I0125 17:20:44.622149   29351 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem
	I0125 17:20:44.622220   29351 main.go:130] libmachine: Decoding PEM data...
	I0125 17:20:44.622260   29351 main.go:130] libmachine: Parsing certificate...
	I0125 17:20:44.622371   29351 main.go:130] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem
	I0125 17:20:44.622428   29351 main.go:130] libmachine: Decoding PEM data...
	I0125 17:20:44.622452   29351 main.go:130] libmachine: Parsing certificate...
	I0125 17:20:44.623161   29351 cli_runner.go:133] Run: docker network inspect kindnet-20220125165335-11219 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0125 17:20:44.735457   29351 cli_runner.go:180] docker network inspect kindnet-20220125165335-11219 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0125 17:20:44.735563   29351 network_create.go:254] running [docker network inspect kindnet-20220125165335-11219] to gather additional debugging logs...
	I0125 17:20:44.735581   29351 cli_runner.go:133] Run: docker network inspect kindnet-20220125165335-11219
	W0125 17:20:44.842722   29351 cli_runner.go:180] docker network inspect kindnet-20220125165335-11219 returned with exit code 1
	I0125 17:20:44.842745   29351 network_create.go:257] error running [docker network inspect kindnet-20220125165335-11219]: docker network inspect kindnet-20220125165335-11219: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220125165335-11219
	I0125 17:20:44.842759   29351 network_create.go:259] output of [docker network inspect kindnet-20220125165335-11219]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220125165335-11219
	
	** /stderr **
	I0125 17:20:44.842846   29351 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0125 17:20:44.948984   29351 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000658650] misses:0}
	I0125 17:20:44.949023   29351 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0125 17:20:44.949042   29351 network_create.go:106] attempt to create docker network kindnet-20220125165335-11219 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0125 17:20:44.949129   29351 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220125165335-11219
	W0125 17:20:45.056684   29351 cli_runner.go:180] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220125165335-11219 returned with exit code 1
	W0125 17:20:45.056725   29351 network_create.go:98] failed to create docker network kindnet-20220125165335-11219 192.168.49.0/24, will retry: subnet is taken
	I0125 17:20:45.056941   29351 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000658650] amended:false}} dirty:map[] misses:0}
	I0125 17:20:45.056955   29351 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0125 17:20:45.057133   29351 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000658650] amended:true}} dirty:map[192.168.49.0:0xc000658650 192.168.58.0:0xc00069a448] misses:0}
	I0125 17:20:45.057146   29351 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0125 17:20:45.057153   29351 network_create.go:106] attempt to create docker network kindnet-20220125165335-11219 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0125 17:20:45.057225   29351 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220125165335-11219
	I0125 17:20:50.848394   29351 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220125165335-11219: (5.791092705s)
	I0125 17:20:50.848416   29351 network_create.go:90] docker network kindnet-20220125165335-11219 192.168.58.0/24 created
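The two attempts above show the subnet walk: 192.168.49.0/24 is taken (the enable-default-cni cluster running in parallel holds it), so minikube skips the unexpired reservation and succeeds with 192.168.58.0/24. A condensed sketch of that retry loop follows; the candidate list and helper names are illustrative, while the docker arguments are copied from the log.

package main

import (
	"fmt"
	"net"
	"os/exec"
)

// createNetwork tries each candidate /24 in turn until `docker network create`
// succeeds, mirroring the "subnet is taken, will retry" behavior in the log.
func createNetwork(name string, candidates []string) (string, error) {
	for _, subnet := range candidates {
		ip, _, err := net.ParseCIDR(subnet)
		if err != nil {
			return "", err
		}
		gateway := ip.To4()
		gateway[3] = 1 // x.y.z.1 as the gateway, as in the log
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway.String(),
			"-o", "--ip-masq", "-o", "--icc",
			"--label=created_by.minikube.sigs.k8s.io=true", name)
		if err := cmd.Run(); err == nil {
			return subnet, nil
		}
		fmt.Printf("subnet %s taken, retrying\n", subnet)
	}
	return "", fmt.Errorf("no free subnet for %s", name)
}

func main() {
	subnet, err := createNetwork("example-net",
		[]string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", subnet)
}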
	I0125 17:20:50.848435   29351 kic.go:106] calculated static IP "192.168.58.2" for the "kindnet-20220125165335-11219" container
	I0125 17:20:50.848550   29351 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
	I0125 17:20:50.958506   29351 cli_runner.go:133] Run: docker volume create kindnet-20220125165335-11219 --label name.minikube.sigs.k8s.io=kindnet-20220125165335-11219 --label created_by.minikube.sigs.k8s.io=true
	I0125 17:20:51.066869   29351 oci.go:102] Successfully created a docker volume kindnet-20220125165335-11219
	I0125 17:20:51.066989   29351 cli_runner.go:133] Run: docker run --rm --name kindnet-20220125165335-11219-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220125165335-11219 --entrypoint /usr/bin/test -v kindnet-20220125165335-11219:/var gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -d /var/lib
	I0125 17:20:51.594688   29351 oci.go:106] Successfully prepared a docker volume kindnet-20220125165335-11219
	I0125 17:20:51.594775   29351 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0125 17:20:51.594797   29351 kic.go:179] Starting extracting preloaded images to volume ...
	I0125 17:20:51.594936   29351 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220125165335-11219:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir
	I0125 17:20:56.962344   29351 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220125165335-11219:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b -I lz4 -xf /preloaded.tar -C /extractDir: (5.367323345s)
	I0125 17:20:56.962368   29351 kic.go:188] duration metric: took 5.367530 seconds to extract preloaded images to volume
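Note how the preload never touches the node container itself: the tarball is bind-mounted read-only into a throwaway kicbase container and untarred straight into the named volume that later becomes the node's /var. A sketch of the same trick wrapped in Go (the local paths in main are hypothetical; the docker arguments match the log's sidecar invocation):

package main

import (
	"os"
	"os/exec"
)

// extractPreload untars an lz4 preload into a docker volume by running tar
// inside the base image, as the log's extraction step does.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// hypothetical local path and volume name for demonstration
	if err := extractPreload(
		"/tmp/preloaded-images-k8s-v17-v1.23.2-docker-overlay2-amd64.tar.lz4",
		"example-volume",
		"gcr.io/k8s-minikube/kicbase:v0.0.29",
	); err != nil {
		panic(err)
	}
}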
	I0125 17:20:56.962478   29351 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0125 17:20:57.118425   29351 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220125165335-11219 --name kindnet-20220125165335-11219 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220125165335-11219 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220125165335-11219 --network kindnet-20220125165335-11219 --ip 192.168.58.2 --volume kindnet-20220125165335-11219:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b
	I0125 17:21:08.882457   29351 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220125165335-11219 --name kindnet-20220125165335-11219 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220125165335-11219 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220125165335-11219 --network kindnet-20220125165335-11219 --ip 192.168.58.2 --volume kindnet-20220125165335-11219:/var --security-opt apparmor=unconfined --memory=2048mb --memory-swap=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b: (11.763887954s)
	I0125 17:21:08.883256   29351 cli_runner.go:133] Run: docker container inspect kindnet-20220125165335-11219 --format={{.State.Running}}
	I0125 17:21:08.999705   29351 cli_runner.go:133] Run: docker container inspect kindnet-20220125165335-11219 --format={{.State.Status}}
	I0125 17:21:09.113943   29351 cli_runner.go:133] Run: docker exec kindnet-20220125165335-11219 stat /var/lib/dpkg/alternatives/iptables
	I0125 17:21:09.282521   29351 oci.go:281] the created container "kindnet-20220125165335-11219" has a running status.
	I0125 17:21:09.282563   29351 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/kindnet-20220125165335-11219/id_rsa...
	I0125 17:21:09.397306   29351 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/kindnet-20220125165335-11219/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0125 17:21:09.602932   29351 cli_runner.go:133] Run: docker container inspect kindnet-20220125165335-11219 --format={{.State.Status}}
	I0125 17:21:09.715666   29351 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0125 17:21:09.715688   29351 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220125165335-11219 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0125 17:21:09.885902   29351 cli_runner.go:133] Run: docker container inspect kindnet-20220125165335-11219 --format={{.State.Status}}
	I0125 17:21:09.996277   29351 machine.go:88] provisioning docker machine ...
	I0125 17:21:09.996319   29351 ubuntu.go:169] provisioning hostname "kindnet-20220125165335-11219"
	I0125 17:21:09.996435   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:10.115232   29351 main.go:130] libmachine: Using SSH client type: native
	I0125 17:21:10.115526   29351 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 59241 <nil> <nil>}
	I0125 17:21:10.115546   29351 main.go:130] libmachine: About to run SSH command:
	sudo hostname kindnet-20220125165335-11219 && echo "kindnet-20220125165335-11219" | sudo tee /etc/hostname
	I0125 17:21:10.262927   29351 main.go:130] libmachine: SSH cmd err, output: <nil>: kindnet-20220125165335-11219
	
	I0125 17:21:10.263030   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:10.383219   29351 main.go:130] libmachine: Using SSH client type: native
	I0125 17:21:10.383393   29351 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 59241 <nil> <nil>}
	I0125 17:21:10.383417   29351 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20220125165335-11219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220125165335-11219/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20220125165335-11219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0125 17:21:10.524102   29351 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0125 17:21:10.524129   29351 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube CaCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube}
	I0125 17:21:10.524153   29351 ubuntu.go:177] setting up certificates
	I0125 17:21:10.524165   29351 provision.go:83] configureAuth start
	I0125 17:21:10.524285   29351 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220125165335-11219
	I0125 17:21:10.646474   29351 provision.go:138] copyHostCerts
	I0125 17:21:10.646583   29351 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem, removing ...
	I0125 17:21:10.646595   29351 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem
	I0125 17:21:10.647151   29351 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/key.pem (1675 bytes)
	I0125 17:21:10.647367   29351 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem, removing ...
	I0125 17:21:10.647384   29351 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem
	I0125 17:21:10.647447   29351 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.pem (1082 bytes)
	I0125 17:21:10.647609   29351 exec_runner.go:144] found /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem, removing ...
	I0125 17:21:10.647616   29351 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem
	I0125 17:21:10.647680   29351 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cert.pem (1123 bytes)
	I0125 17:21:10.647831   29351 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem org=jenkins.kindnet-20220125165335-11219 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220125165335-11219]
	I0125 17:21:10.716764   29351 provision.go:172] copyRemoteCerts
	I0125 17:21:10.716900   29351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0125 17:21:10.716973   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:10.838003   29351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59241 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/kindnet-20220125165335-11219/id_rsa Username:docker}
	I0125 17:21:10.936323   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0125 17:21:10.957568   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0125 17:21:10.978094   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0125 17:21:10.998121   29351 provision.go:86] duration metric: configureAuth took 473.941018ms
	I0125 17:21:10.998137   29351 ubuntu.go:193] setting minikube options for container-runtime
	I0125 17:21:10.998293   29351 config.go:176] Loaded profile config "kindnet-20220125165335-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 17:21:10.998403   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:11.116226   29351 main.go:130] libmachine: Using SSH client type: native
	I0125 17:21:11.116381   29351 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 59241 <nil> <nil>}
	I0125 17:21:11.116398   29351 main.go:130] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0125 17:21:11.265606   29351 main.go:130] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0125 17:21:11.265620   29351 ubuntu.go:71] root file system type: overlay
	I0125 17:21:11.265772   29351 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0125 17:21:11.265878   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:11.376888   29351 main.go:130] libmachine: Using SSH client type: native
	I0125 17:21:11.377054   29351 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 59241 <nil> <nil>}
	I0125 17:21:11.377113   29351 main.go:130] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0125 17:21:11.521681   29351 main.go:130] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0125 17:21:11.521786   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:11.634657   29351 main.go:130] libmachine: Using SSH client type: native
	I0125 17:21:11.634808   29351 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1397a40] 0x139ab20 <nil>  [] 0s} 127.0.0.1 59241 <nil> <nil>}
	I0125 17:21:11.634824   29351 main.go:130] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0125 17:21:24.531869   29351 main.go:130] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-01-26 01:21:11.519195002 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0125 17:21:24.531897   29351 machine.go:91] provisioned docker machine in 14.53549548s
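Most of that 14.5s provisioning time is the docker restart triggered by the unit update at 17:21:11: the rendered unit is diffed against the one on disk, and only on a difference is it moved into place, followed by daemon-reload, enable, and restart. A sketch of that write-if-changed idiom in Go (local-filesystem variant of the shell one-liner the log runs over SSH; the /tmp path in main is for demonstration):

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// installIfChanged writes newUnit to path only when it differs from the
// current contents, then reloads and restarts the service -- the same
// "diff || { mv && daemon-reload && restart; }" idiom from the log.
func installIfChanged(path string, newUnit []byte, service string) error {
	old, _ := os.ReadFile(path) // a missing file simply counts as "changed"
	if bytes.Equal(old, newUnit) {
		return nil // unchanged: skip the costly service restart
	}
	if err := os.WriteFile(path, newUnit, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", service},
		{"restart", service},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=example\n")
	if err := installIfChanged("/tmp/docker.service", unit, "docker"); err != nil {
		panic(err)
	}
}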
	I0125 17:21:24.531905   29351 client.go:171] LocalClient.Create took 39.909614806s
	I0125 17:21:24.531923   29351 start.go:168] duration metric: libmachine.API.Create for "kindnet-20220125165335-11219" took 39.90969656s
	I0125 17:21:24.531933   29351 start.go:267] post-start starting for "kindnet-20220125165335-11219" (driver="docker")
	I0125 17:21:24.531937   29351 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0125 17:21:24.532023   29351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0125 17:21:24.532094   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:24.652703   29351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59241 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/kindnet-20220125165335-11219/id_rsa Username:docker}
	I0125 17:21:24.750621   29351 ssh_runner.go:195] Run: cat /etc/os-release
	I0125 17:21:24.755010   29351 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0125 17:21:24.755033   29351 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0125 17:21:24.755042   29351 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0125 17:21:24.755049   29351 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0125 17:21:24.755059   29351 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/addons for local assets ...
	I0125 17:21:24.755167   29351 filesync.go:126] Scanning /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files for local assets ...
	I0125 17:21:24.755796   29351 filesync.go:149] local asset: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem -> 112192.pem in /etc/ssl/certs
	I0125 17:21:24.756009   29351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0125 17:21:24.767116   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem --> /etc/ssl/certs/112192.pem (1708 bytes)
	I0125 17:21:24.790357   29351 start.go:270] post-start completed in 258.413914ms
	I0125 17:21:24.791442   29351 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220125165335-11219
	I0125 17:21:24.916796   29351 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/config.json ...
	I0125 17:21:24.917294   29351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0125 17:21:24.917371   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:25.039907   29351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59241 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/kindnet-20220125165335-11219/id_rsa Username:docker}
	I0125 17:21:25.136562   29351 start.go:129] duration metric: createHost completed in 40.561964209s
	I0125 17:21:25.136585   29351 start.go:80] releasing machines lock for "kindnet-20220125165335-11219", held for 40.562109663s
	I0125 17:21:25.136683   29351 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220125165335-11219
	I0125 17:21:25.254641   29351 ssh_runner.go:195] Run: systemctl --version
	I0125 17:21:25.254714   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:25.255330   29351 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0125 17:21:25.255559   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:25.382952   29351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59241 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/kindnet-20220125165335-11219/id_rsa Username:docker}
	I0125 17:21:25.382953   29351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59241 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/kindnet-20220125165335-11219/id_rsa Username:docker}
	I0125 17:21:25.678824   29351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0125 17:21:25.689428   29351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0125 17:21:25.701801   29351 cruntime.go:272] skipping containerd shutdown because we are bound to it
	I0125 17:21:25.701859   29351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0125 17:21:25.711889   29351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0125 17:21:25.729481   29351 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0125 17:21:25.789434   29351 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0125 17:21:25.850880   29351 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0125 17:21:25.862318   29351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0125 17:21:25.920807   29351 ssh_runner.go:195] Run: sudo systemctl start docker
	I0125 17:21:25.930753   29351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0125 17:21:25.972736   29351 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0125 17:21:26.063547   29351 out.go:203] * Preparing Kubernetes v1.23.2 on Docker 20.10.12 ...
	I0125 17:21:26.063740   29351 cli_runner.go:133] Run: docker exec -t kindnet-20220125165335-11219 dig +short host.docker.internal
	I0125 17:21:26.227170   29351 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0125 17:21:26.228244   29351 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0125 17:21:26.232805   29351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
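The hosts update above is a small idempotent pattern: strip any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. The equivalent in Go, operating on a hypothetical copy of the hosts file rather than /etc/hosts itself:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one line
// mapping name to ip -- the grep -v / echo / cp pipeline from the log.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale entry
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts", "192.168.65.2", "host.minikube.internal"); err != nil {
		panic(err)
	}
	fmt.Println("entry ensured")
}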
	I0125 17:21:26.242696   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:26.389524   29351 out.go:176]   - kubelet.housekeeping-interval=5m
	I0125 17:21:26.423255   29351 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0125 17:21:26.423349   29351 preload.go:132] Checking if preload exists for k8s version v1.23.2 and runtime docker
	I0125 17:21:26.423465   29351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0125 17:21:26.460024   29351 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0125 17:21:26.460038   29351 docker.go:537] Images already preloaded, skipping extraction
	I0125 17:21:26.460134   29351 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0125 17:21:26.498895   29351 docker.go:606] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.23.2
	k8s.gcr.io/kube-scheduler:v1.23.2
	k8s.gcr.io/kube-controller-manager:v1.23.2
	k8s.gcr.io/kube-proxy:v1.23.2
	k8s.gcr.io/etcd:3.5.1-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	kubernetesui/dashboard:v2.3.1
	kubernetesui/metrics-scraper:v1.0.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0125 17:21:26.498920   29351 cache_images.go:84] Images are preloaded, skipping loading
	I0125 17:21:26.499018   29351 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0125 17:21:26.597954   29351 cni.go:93] Creating CNI manager for "kindnet"
	I0125 17:21:26.597984   29351 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0125 17:21:26.597999   29351 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220125165335-11219 NodeName:kindnet-20220125165335-11219 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0125 17:21:26.598126   29351 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kindnet-20220125165335-11219"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0125 17:21:26.598350   29351 kubeadm.go:791] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kindnet-20220125165335-11219 --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.2 ClusterName:kindnet-20220125165335-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
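The ExecStart line above shows where the two extra-config pairs from the start flags end up: kubelet.housekeeping-interval=5m and the auto-set kubelet.cni-conf-dir become ordinary --housekeeping-interval and --cni-conf-dir kubelet flags. A toy version of that flag assembly (the struct below is a hypothetical stand-in for the {Component Key Value} triples the log prints):

package main

import (
	"fmt"
	"sort"
	"strings"
)

// extraOption mirrors the {Component Key Value} triples in the log above.
type extraOption struct {
	Component, Key, Value string
}

// kubeletFlags renders the kubelet-scoped options as --key=value flags,
// sorted for a stable command line.
func kubeletFlags(opts []extraOption) string {
	var flags []string
	for _, o := range opts {
		if o.Component == "kubelet" {
			flags = append(flags, fmt.Sprintf("--%s=%s", o.Key, o.Value))
		}
	}
	sort.Strings(flags)
	return strings.Join(flags, " ")
}

func main() {
	opts := []extraOption{
		{"kubelet", "housekeeping-interval", "5m"},
		{"kubelet", "cni-conf-dir", "/etc/cni/net.mk"},
	}
	fmt.Println(kubeletFlags(opts))
	// Output: --cni-conf-dir=/etc/cni/net.mk --housekeeping-interval=5m
}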
	I0125 17:21:26.598422   29351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.2
	I0125 17:21:26.606773   29351 binaries.go:44] Found k8s binaries, skipping transfer
	I0125 17:21:26.606841   29351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0125 17:21:26.614825   29351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0125 17:21:26.627937   29351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0125 17:21:26.640951   29351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I0125 17:21:26.653652   29351 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0125 17:21:26.657718   29351 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0125 17:21:26.668691   29351 certs.go:54] Setting up /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219 for IP: 192.168.58.2
	I0125 17:21:26.668821   29351 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.key
	I0125 17:21:26.668878   29351 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.key
	I0125 17:21:26.668933   29351 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/client.key
	I0125 17:21:26.668945   29351 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/client.crt with IP's: []
	I0125 17:21:26.800418   29351 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/client.crt ...
	I0125 17:21:26.800438   29351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/client.crt: {Name:mk2e1bc5746c8c997abac8f200c5a68d8d8d0255 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:21:26.801440   29351 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/client.key ...
	I0125 17:21:26.801452   29351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/client.key: {Name:mkfe94f764a39b5e4b7823dc9f98a85b0f336daa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:21:26.801982   29351 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/apiserver.key.cee25041
	I0125 17:21:26.802006   29351 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0125 17:21:26.998081   29351 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/apiserver.crt.cee25041 ...
	I0125 17:21:26.998097   29351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/apiserver.crt.cee25041: {Name:mk42d73d7149dfb81400fd8191f4e2eca59d16a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:21:26.999325   29351 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/apiserver.key.cee25041 ...
	I0125 17:21:26.999335   29351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/apiserver.key.cee25041: {Name:mkae9365a8eccbf81bc887bfcdd73094a0ae0d1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:21:26.999765   29351 certs.go:320] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/apiserver.crt
	I0125 17:21:26.999946   29351 certs.go:324] copying /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/apiserver.key
	I0125 17:21:27.000108   29351 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/proxy-client.key
	I0125 17:21:27.000132   29351 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/proxy-client.crt with IP's: []
	I0125 17:21:27.099751   29351 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/proxy-client.crt ...
	I0125 17:21:27.099769   29351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/proxy-client.crt: {Name:mka9464755b646976ea1b991a99d50200cb7e785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:21:27.100152   29351 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/proxy-client.key ...
	I0125 17:21:27.100168   29351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/proxy-client.key: {Name:mk21a1cd8839353671f1610516827bc5514a154e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:21:27.101115   29351 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219.pem (1338 bytes)
	W0125 17:21:27.101196   29351 certs.go:384] ignoring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219_empty.pem, impossibly tiny 0 bytes
	I0125 17:21:27.101218   29351 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca-key.pem (1675 bytes)
	I0125 17:21:27.101282   29351 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/ca.pem (1082 bytes)
	I0125 17:21:27.101351   29351 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/cert.pem (1123 bytes)
	I0125 17:21:27.101415   29351 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/key.pem (1675 bytes)
	I0125 17:21:27.101535   29351 certs.go:388] found cert: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem (1708 bytes)
	I0125 17:21:27.103063   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0125 17:21:27.131155   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0125 17:21:27.148373   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0125 17:21:27.174480   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kindnet-20220125165335-11219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0125 17:21:27.192169   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0125 17:21:27.215708   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0125 17:21:27.235329   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0125 17:21:27.255052   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0125 17:21:27.278189   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/certs/11219.pem --> /usr/share/ca-certificates/11219.pem (1338 bytes)
	I0125 17:21:27.296347   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/ssl/certs/112192.pem --> /usr/share/ca-certificates/112192.pem (1708 bytes)
	I0125 17:21:27.319350   29351 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0125 17:21:27.338022   29351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0125 17:21:27.352764   29351 ssh_runner.go:195] Run: openssl version
	I0125 17:21:27.359831   29351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11219.pem && ln -fs /usr/share/ca-certificates/11219.pem /etc/ssl/certs/11219.pem"
	I0125 17:21:27.370453   29351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11219.pem
	I0125 17:21:27.374969   29351 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 26 00:05 /usr/share/ca-certificates/11219.pem
	I0125 17:21:27.375024   29351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11219.pem
	I0125 17:21:27.380745   29351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11219.pem /etc/ssl/certs/51391683.0"
	I0125 17:21:27.388920   29351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112192.pem && ln -fs /usr/share/ca-certificates/112192.pem /etc/ssl/certs/112192.pem"
	I0125 17:21:27.397323   29351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112192.pem
	I0125 17:21:27.401480   29351 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 26 00:05 /usr/share/ca-certificates/112192.pem
	I0125 17:21:27.401531   29351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112192.pem
	I0125 17:21:27.407290   29351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112192.pem /etc/ssl/certs/3ec20f2e.0"
	I0125 17:21:27.415550   29351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0125 17:21:27.423588   29351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0125 17:21:27.428099   29351 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 26 00:00 /usr/share/ca-certificates/minikubeCA.pem
	I0125 17:21:27.428153   29351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0125 17:21:27.433758   29351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0125 17:21:27.441437   29351 kubeadm.go:388] StartCluster: {Name:kindnet-20220125165335-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:kindnet-20220125165335-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0125 17:21:27.441591   29351 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0125 17:21:27.473015   29351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0125 17:21:27.481158   29351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0125 17:21:27.488526   29351 kubeadm.go:218] ignoring SystemVerification for kubeadm because of docker driver
	I0125 17:21:27.488577   29351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0125 17:21:27.495935   29351 kubeadm.go:149] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0125 17:21:27.495957   29351 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0125 17:21:28.040184   29351 out.go:203]   - Generating certificates and keys ...
	I0125 17:21:31.144143   29351 out.go:203]   - Booting up control plane ...
	I0125 17:21:39.177486   29351 out.go:203]   - Configuring RBAC rules ...
	I0125 17:21:39.560946   29351 cni.go:93] Creating CNI manager for "kindnet"
	I0125 17:21:39.590073   29351 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0125 17:21:39.591171   29351 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0125 17:21:39.598003   29351 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.2/kubectl ...
	I0125 17:21:39.598016   29351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0125 17:21:39.633348   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0125 17:21:40.312174   29351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0125 17:21:40.312253   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl label nodes minikube.k8s.io/version=v1.25.1 minikube.k8s.io/commit=f2b90e74c34b616e7f63aca230995ce4db99c965 minikube.k8s.io/name=kindnet-20220125165335-11219 minikube.k8s.io/updated_at=2022_01_25T17_21_40_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:40.312258   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:40.332763   29351 ops.go:34] apiserver oom_adj: -16
	I0125 17:21:40.387126   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:40.972539   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:41.473792   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:41.976591   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:42.473605   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:42.973581   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:43.474222   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:43.976680   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:44.472315   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:44.972479   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:45.473045   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:45.977041   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:46.473331   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:46.976850   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:47.472821   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:47.977150   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:48.473202   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:48.972642   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:49.473128   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:49.977262   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:50.475881   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:50.974649   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:51.472979   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:51.977707   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:52.472405   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:52.972675   29351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0125 17:21:53.034111   29351 kubeadm.go:867] duration metric: took 12.721838808s to wait for elevateKubeSystemPrivileges.
	I0125 17:21:53.034150   29351 kubeadm.go:390] StartCluster complete in 25.592525468s
	I0125 17:21:53.034172   29351 settings.go:142] acquiring lock: {Name:mk4b38f66d2c1d7ad910ce332a6e0f9663533ce8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:21:53.034283   29351 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	I0125 17:21:53.035059   29351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig: {Name:mk22ac11166e634b93c7a48f1f20a682ee77d8e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 17:21:53.567335   29351 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220125165335-11219" rescaled to 1
	I0125 17:21:53.567386   29351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0125 17:21:53.567384   29351 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}
	I0125 17:21:53.567411   29351 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0125 17:21:53.567572   29351 config.go:176] Loaded profile config "kindnet-20220125165335-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 17:21:53.609117   29351 out.go:176] * Verifying Kubernetes components...
	I0125 17:21:53.609253   29351 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220125165335-11219"
	I0125 17:21:53.609254   29351 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220125165335-11219"
	I0125 17:21:53.609272   29351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220125165335-11219"
	I0125 17:21:53.609278   29351 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220125165335-11219"
	W0125 17:21:53.609285   29351 addons.go:165] addon storage-provisioner should already be in state true
	I0125 17:21:53.609328   29351 host.go:66] Checking if "kindnet-20220125165335-11219" exists ...
	I0125 17:21:53.609350   29351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0125 17:21:53.609824   29351 cli_runner.go:133] Run: docker container inspect kindnet-20220125165335-11219 --format={{.State.Status}}
	I0125 17:21:53.610469   29351 cli_runner.go:133] Run: docker container inspect kindnet-20220125165335-11219 --format={{.State.Status}}
	I0125 17:21:53.628429   29351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0125 17:21:53.640457   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:53.811407   29351 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0125 17:21:53.811596   29351 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0125 17:21:53.811608   29351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0125 17:21:53.811728   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:53.817590   29351 start.go:777] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS
	I0125 17:21:53.825094   29351 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220125165335-11219"
	W0125 17:21:53.825135   29351 addons.go:165] addon default-storageclass should already be in state true
	I0125 17:21:53.825165   29351 host.go:66] Checking if "kindnet-20220125165335-11219" exists ...
	I0125 17:21:53.825795   29351 cli_runner.go:133] Run: docker container inspect kindnet-20220125165335-11219 --format={{.State.Status}}
	I0125 17:21:53.832700   29351 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220125165335-11219" to be "Ready" ...
	I0125 17:21:53.960934   29351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59241 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/kindnet-20220125165335-11219/id_rsa Username:docker}
	I0125 17:21:53.972667   29351 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0125 17:21:53.972680   29351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0125 17:21:53.972789   29351 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220125165335-11219
	I0125 17:21:54.072806   29351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0125 17:21:54.110660   29351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59241 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/kindnet-20220125165335-11219/id_rsa Username:docker}
	I0125 17:21:54.232202   29351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0125 17:21:54.407926   29351 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0125 17:21:54.407941   29351 addons.go:417] enableAddons completed in 840.534557ms
	I0125 17:21:55.851464   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:21:57.852908   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:00.351477   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:02.352002   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:04.852071   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:06.853975   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:09.351516   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:11.351689   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:13.353246   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:15.852486   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:17.854248   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:20.352859   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:22.353281   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:24.852892   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:27.351891   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:29.356933   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:31.853222   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:34.351392   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:36.351889   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:38.352685   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:40.853804   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:43.352409   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:45.353081   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:47.354049   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:49.852307   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:52.351515   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:54.852827   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:56.853363   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:22:59.351866   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:01.352230   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:03.353008   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:05.851329   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:07.852794   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:09.853140   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:12.353032   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:14.853317   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:17.353057   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:19.853323   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:22.352705   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:24.853592   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:26.854556   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:29.352637   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:31.354177   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:33.851789   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:35.852949   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:37.854372   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:40.353401   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:42.853287   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:44.853787   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:46.853877   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:49.354411   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:51.853747   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:53.854083   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:56.353876   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:23:58.354201   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:00.355530   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:02.852407   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:04.854023   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:07.353050   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:09.353555   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:11.853840   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:13.856804   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:15.857010   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:18.354130   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:20.356712   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:22.854251   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:25.353707   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:27.355003   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:29.355938   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:31.853370   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:33.855287   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:36.354163   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:38.854757   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:41.355088   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:43.854068   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:46.354471   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:48.354785   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:50.853345   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:52.854447   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:55.354332   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:57.357316   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:24:59.854136   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:01.854675   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:04.356295   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:06.853840   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:09.353134   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:11.855451   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:14.352215   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:16.352946   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:18.853819   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:20.854283   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:22.855663   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:25.353370   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:27.854921   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:30.352335   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:32.352744   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:34.353029   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:36.355797   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:38.855712   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:41.354053   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:43.854846   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:46.354019   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:48.856192   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:51.357421   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:53.857408   29351 node_ready.go:58] node "kindnet-20220125165335-11219" has status "Ready":"False"
	I0125 17:25:53.859306   29351 node_ready.go:38] duration metric: took 4m0.022448147s waiting for node "kindnet-20220125165335-11219" to be "Ready" ...
	I0125 17:25:53.886361   29351 out.go:176] 
	W0125 17:25:53.886493   29351 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0125 17:25:53.886508   29351 out.go:241] * 
	W0125 17:25:53.887559   29351 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0125 17:25:53.959847   29351 out.go:176] 
** /stderr **
net_test.go:101: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (310.34s)
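The failure above is a readiness timeout rather than a crash: kubeadm init and the addon setup completed, but the node never reported Ready within the 5m0s window (node_ready.go polled from 17:21:53 to 17:25:53). With --cni=kindnet the kubelet stays NotReady until the kindnet DaemonSet pod has written its CNI config, so a missing or crash-looping kindnet pod is the usual suspect. A possible triage sequence against the leftover profile, using only standard minikube/kubectl commands and the names taken from the log above, would be:

	minikube logs -p kindnet-20220125165335-11219 --file=logs.txt
	kubectl --context kindnet-20220125165335-11219 get nodes -o wide
	kubectl --context kindnet-20220125165335-11219 -n kube-system get pods -o wide
	kubectl --context kindnet-20220125165335-11219 describe node kindnet-20220125165335-11219

describe node surfaces the NotReady condition message, which typically reads "network plugin is not ready: cni config uninitialized" when no CNI config was ever installed.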
x
+
TestNetworkPlugins/group/bridge/DNS (293.47s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:22:44.648546   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135645727s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14778925s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:23:24.954488   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.127486586s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:23:42.022288   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.148085651s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0125 17:23:52.699112   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:24:04.976005   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.119609185s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131781907s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:24:43.666176   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:24:43.671373   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:24:43.681643   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:24:43.703373   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:24:43.749360   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:24:43.830599   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:24:43.999498   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:24:44.319680   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:24:44.964145   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:24:46.249353   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:24:48.816184   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147290605s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0125 17:24:53.942312   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:25:04.191548   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.149672383s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0125 17:25:24.677393   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:25:28.071807   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.15933186s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0125 17:25:51.719671   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.154908112s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0125 17:26:54.026468   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:27:27.576080   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context bridge-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.134521108s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:174: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (293.47s)
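
Note on the failure mode: each attempt above re-runs the same probe (kubectl exec into the netcat deployment, then nslookup kubernetes.default), and the subtest fails once its time budget is exhausted without the cluster service IP 10.96.0.1 ever appearing in the output. A minimal, self-contained Go sketch of that retry pattern (illustrative only; names and timings are assumptions, not the actual net_test.go code):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// runProbe executes the same command the log records, bounded by ctx.
	func runProbe(ctx context.Context, kubeContext string) (string, error) {
		cmd := exec.CommandContext(ctx, "kubectl", "--context", kubeContext,
			"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default")
		out, err := cmd.CombinedOutput()
		return string(out), err
	}

	func main() {
		// Overall budget, comparable to the ~5 minutes this subtest consumed.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()

		for {
			out, err := runProbe(ctx, "bridge-20220125165334-11219")
			if err == nil && strings.Contains(out, "10.96.0.1") {
				fmt.Println("DNS resolved")
				return
			}
			if ctx.Err() != nil {
				fmt.Println("giving up:", ctx.Err()) // reported as FAIL above
				return
			}
			time.Sleep(15 * time.Second) // wait before the next attempt
		}
	}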

TestNetworkPlugins/group/kubenet/DNS (334.6s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:32:14.863351   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136872163s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:32:29.939624   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:32:29.944738   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:32:29.954812   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:32:29.975658   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:32:30.016578   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:32:30.098715   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:32:30.263235   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:32:30.587692   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:32:31.236565   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:32:32.516987   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:32:35.082595   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:32:38.539661   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:32:40.202917   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.159768733s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:32:44.652549   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 17:32:50.444912   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125322843s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:33:10.931717   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.154814704s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
E0125 17:33:24.962421   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: signal: killed (12.452476684s)

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (2.037µs)
E0125 17:33:42.037500   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.447µs)

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.999µs)
E0125 17:34:04.986825   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (896ns)
E0125 17:34:43.669904   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:34:48.071970   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.014µs)
E0125 17:35:13.823597   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.068µs)
E0125 17:35:38.045808   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:35:38.050909   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:35:38.062052   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:35:38.082940   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:35:38.123264   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:35:38.204033   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:35:38.366122   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:35:38.695600   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:35:39.345637   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:35:40.628001   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:35:43.196924   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:35:48.317324   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:35:51.729371   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:35:58.562415   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (974ns)

=== CONT  TestNetworkPlugins/group/kubenet/DNS
net_test.go:163: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:163: (dbg) Non-zero exit: kubectl --context kubenet-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default: context deadline exceeded (1.084µs)
net_test.go:169: failed to do nslookup on kubernetes.default: context deadline exceeded
net_test.go:174: failed nslookup: got="", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/kubenet/DNS (334.60s)
E0125 17:50:38.068968   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:50:51.760141   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
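
The sub-microsecond attempt durations above (974ns, 1.084µs, and similar) indicate that the subtest's context had already expired before kubectl could start: os/exec checks the context in Cmd.Start and returns its error without launching a process. A standalone illustration (hypothetical command, not test code):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Deadline in the past: the context is done before the command runs.
		ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
		defer cancel()
		time.Sleep(time.Millisecond)

		start := time.Now()
		err := exec.CommandContext(ctx, "kubectl", "version", "--client").Run()
		// No process is spawned; Run fails almost instantly with
		// "context deadline exceeded", matching the nanosecond-scale
		// durations logged for the final attempts above.
		fmt.Println(err, time.Since(start))
	}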


Test pass (250/281)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 12.35
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.29
10 TestDownloadOnly/v1.23.2/json-events 2.86
14 TestDownloadOnly/v1.23.2/kubectl 0
15 TestDownloadOnly/v1.23.2/LogsDuration 0.28
17 TestDownloadOnly/v1.23.3-rc.0/json-events 4.91
18 TestDownloadOnly/v1.23.3-rc.0/preload-exists 0
21 TestDownloadOnly/v1.23.3-rc.0/kubectl 0
22 TestDownloadOnly/v1.23.3-rc.0/LogsDuration 0.28
23 TestDownloadOnly/DeleteAll 1.05
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.6
26 TestBinaryMirror 6.14
27 TestOffline 127.59
29 TestAddons/Setup 159.68
33 TestAddons/parallel/MetricsServer 5.89
34 TestAddons/parallel/HelmTiller 12.1
36 TestAddons/parallel/CSI 41.9
38 TestAddons/serial/GCPAuth 16.32
39 TestAddons/StoppedEnableDisable 18.02
40 TestCertOptions 69.11
41 TestCertExpiration 260.67
42 TestDockerFlags 61.48
43 TestForceSystemdFlag 83.05
44 TestForceSystemdEnv 83.53
46 TestHyperKitDriverInstallOrUpdate 7.24
49 TestErrorSpam/setup 72.12
50 TestErrorSpam/start 2.26
51 TestErrorSpam/status 1.9
52 TestErrorSpam/pause 2.12
53 TestErrorSpam/unpause 2.12
54 TestErrorSpam/stop 17.99
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 124.86
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 7.38
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 1.76
65 TestFunctional/serial/CacheCmd/cache/add_remote 5.54
66 TestFunctional/serial/CacheCmd/cache/add_local 2.15
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
68 TestFunctional/serial/CacheCmd/cache/list 0.07
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.7
70 TestFunctional/serial/CacheCmd/cache/cache_reload 3.23
71 TestFunctional/serial/CacheCmd/cache/delete 0.15
72 TestFunctional/serial/MinikubeKubectlCmd 0.52
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.58
74 TestFunctional/serial/ExtraConfig 62.21
75 TestFunctional/serial/ComponentHealth 0.11
76 TestFunctional/serial/LogsCmd 2.55
77 TestFunctional/serial/LogsFileCmd 2.32
79 TestFunctional/parallel/ConfigCmd 0.48
81 TestFunctional/parallel/DryRun 1.41
82 TestFunctional/parallel/InternationalLanguage 0.63
83 TestFunctional/parallel/StatusCmd 2
87 TestFunctional/parallel/AddonsCmd 0.28
88 TestFunctional/parallel/PersistentVolumeClaim 26.01
90 TestFunctional/parallel/SSHCmd 1.26
91 TestFunctional/parallel/CpCmd 2.51
92 TestFunctional/parallel/MySQL 23.52
93 TestFunctional/parallel/FileSync 0.72
94 TestFunctional/parallel/CertSync 4.21
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
102 TestFunctional/parallel/Version/short 0.1
103 TestFunctional/parallel/Version/components 1.16
104 TestFunctional/parallel/ImageCommands/ImageListShort 0.41
105 TestFunctional/parallel/ImageCommands/ImageListTable 0.42
106 TestFunctional/parallel/ImageCommands/ImageListJson 0.41
107 TestFunctional/parallel/ImageCommands/ImageListYaml 0.43
108 TestFunctional/parallel/ImageCommands/ImageBuild 3.53
109 TestFunctional/parallel/ImageCommands/Setup 2.48
110 TestFunctional/parallel/DockerEnv/bash 2.87
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.34
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.86
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.33
115 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.2
116 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.7
117 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.04
118 TestFunctional/parallel/ImageCommands/ImageRemove 1.05
119 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.69
120 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.01
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.83
122 TestFunctional/parallel/ProfileCmd/profile_list 0.73
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.85
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.29
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 3.83
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
134 TestFunctional/parallel/MountCmd/any-port 9.53
135 TestFunctional/parallel/MountCmd/specific-port 3.38
136 TestFunctional/delete_addon-resizer_images 0.24
137 TestFunctional/delete_my-image_image 0.11
138 TestFunctional/delete_minikube_cached_images 0.1
141 TestIngressAddonLegacy/StartLegacyK8sCluster 134.38
143 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.39
144 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.62
148 TestJSONOutput/start/Command 124.46
149 TestJSONOutput/start/Audit 0
151 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/pause/Command 0.8
155 TestJSONOutput/pause/Audit 0
157 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/unpause/Command 0.79
161 TestJSONOutput/unpause/Audit 0
163 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/stop/Command 17.04
167 TestJSONOutput/stop/Audit 0
169 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
171 TestErrorJSONOutput 0.73
173 TestKicCustomNetwork/create_custom_network 87.95
174 TestKicCustomNetwork/use_default_bridge_network 73.93
175 TestKicExistingNetwork 85.9
176 TestMainNoArgs 0.07
179 TestMountStart/serial/StartWithMountFirst 46.24
180 TestMountStart/serial/VerifyMountFirst 0.63
181 TestMountStart/serial/StartWithMountSecond 48.58
182 TestMountStart/serial/VerifyMountSecond 0.57
183 TestMountStart/serial/DeleteFirst 12.74
184 TestMountStart/serial/VerifyMountPostDelete 0.58
185 TestMountStart/serial/Stop 7.15
186 TestMountStart/serial/RestartStopped 29.54
187 TestMountStart/serial/VerifyMountPostStop 0.61
190 TestMultiNode/serial/FreshStart2Nodes 233.91
191 TestMultiNode/serial/DeployApp2Nodes 6.76
192 TestMultiNode/serial/PingHostFrom2Pods 0.85
193 TestMultiNode/serial/AddNode 110.2
194 TestMultiNode/serial/ProfileList 0.67
195 TestMultiNode/serial/CopyFile 21.77
196 TestMultiNode/serial/StopNode 10.54
197 TestMultiNode/serial/StartAfterStop 52.33
198 TestMultiNode/serial/RestartKeepsNodes 254.04
199 TestMultiNode/serial/DeleteNode 15.01
200 TestMultiNode/serial/StopMultiNode 24.18
201 TestMultiNode/serial/RestartMultiNode 152.66
202 TestMultiNode/serial/ValidateNameConflict 99.88
206 TestPreload 217.64
208 TestScheduledStopUnix 152.28
211 TestInsufficientStorage 62.83
214 TestKubernetesUpgrade 198.22
215 TestMissingContainerUpgrade 195.67
227 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.43
228 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.53
229 TestStoppedBinaryUpgrade/Setup 0.43
230 TestStoppedBinaryUpgrade/Upgrade 143.08
232 TestPause/serial/Start 108.4
233 TestPause/serial/SecondStartNoReconfiguration 7.19
234 TestPause/serial/Pause 0.82
235 TestPause/serial/VerifyStatus 0.62
236 TestPause/serial/Unpause 0.8
237 TestPause/serial/PauseAgain 0.89
238 TestPause/serial/DeletePaused 10.67
239 TestPause/serial/VerifyDeletedResources 1.02
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.34
249 TestNoKubernetes/serial/StartWithK8s 55.88
250 TestStoppedBinaryUpgrade/MinikubeLogs 2.74
251 TestNetworkPlugins/group/auto/Start 104.81
252 TestNoKubernetes/serial/StartWithStopK8s 28.14
253 TestNoKubernetes/serial/Start 39.39
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.81
255 TestNetworkPlugins/group/auto/KubeletFlags 0.72
256 TestNoKubernetes/serial/ProfileList 2.87
257 TestNetworkPlugins/group/auto/NetCatPod 13
258 TestNoKubernetes/serial/Stop 4.72
259 TestNoKubernetes/serial/StartNoArgs 20.89
260 TestNetworkPlugins/group/auto/DNS 0.15
261 TestNetworkPlugins/group/auto/Localhost 0.14
262 TestNetworkPlugins/group/auto/HairPin 5.14
263 TestNetworkPlugins/group/false/Start 104.95
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 1.47
265 TestNetworkPlugins/group/cilium/Start 110.39
266 TestNetworkPlugins/group/false/KubeletFlags 0.72
267 TestNetworkPlugins/group/false/NetCatPod 11.98
268 TestNetworkPlugins/group/false/DNS 0.14
269 TestNetworkPlugins/group/false/Localhost 0.14
270 TestNetworkPlugins/group/false/HairPin 5.14
271 TestNetworkPlugins/group/cilium/ControllerPod 5.02
273 TestNetworkPlugins/group/cilium/KubeletFlags 0.61
274 TestNetworkPlugins/group/cilium/NetCatPod 14.45
275 TestNetworkPlugins/group/cilium/DNS 0.15
276 TestNetworkPlugins/group/cilium/Localhost 0.13
277 TestNetworkPlugins/group/cilium/HairPin 0.13
278 TestNetworkPlugins/group/custom-weave/Start 100.08
279 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.67
280 TestNetworkPlugins/group/custom-weave/NetCatPod 13.96
281 TestNetworkPlugins/group/enable-default-cni/Start 57.05
282 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.64
283 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.06
286 TestNetworkPlugins/group/bridge/Start 74.69
287 TestNetworkPlugins/group/bridge/KubeletFlags 0.61
288 TestNetworkPlugins/group/bridge/NetCatPod 15.1
290 TestNetworkPlugins/group/kubenet/Start 344.95
292 TestStartStop/group/old-k8s-version/serial/FirstStart 165.15
293 TestStartStop/group/old-k8s-version/serial/DeployApp 11.23
294 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.73
295 TestStartStop/group/old-k8s-version/serial/Stop 18.97
296 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.37
297 TestStartStop/group/old-k8s-version/serial/SecondStart 150.54
298 TestNetworkPlugins/group/kubenet/KubeletFlags 0.66
299 TestNetworkPlugins/group/kubenet/NetCatPod 13.82
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 7.17
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.63
304 TestStartStop/group/old-k8s-version/serial/Pause 4.31
306 TestStartStop/group/no-preload/serial/FirstStart 114.73
307 TestStartStop/group/no-preload/serial/DeployApp 10.05
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.79
309 TestStartStop/group/no-preload/serial/Stop 19.09
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.38
311 TestStartStop/group/no-preload/serial/SecondStart 75.93
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.03
314 TestStartStop/group/embed-certs/serial/FirstStart 327.15
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.95
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.68
317 TestStartStop/group/no-preload/serial/Pause 5.42
319 TestStartStop/group/default-k8s-different-port/serial/FirstStart 316.45
320 TestStartStop/group/embed-certs/serial/DeployApp 10.05
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.84
322 TestStartStop/group/embed-certs/serial/Stop 19.45
323 TestStartStop/group/default-k8s-different-port/serial/DeployApp 10.11
324 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.86
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.56
326 TestStartStop/group/default-k8s-different-port/serial/Stop 13.24
327 TestStartStop/group/embed-certs/serial/SecondStart 302.39
328 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.38
329 TestStartStop/group/default-k8s-different-port/serial/SecondStart 308.52
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 7.01
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.71
333 TestStartStop/group/embed-certs/serial/Pause 4.47
334 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.02
335 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 6.92
336 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.65
337 TestStartStop/group/default-k8s-different-port/serial/Pause 4.91
339 TestStartStop/group/newest-cni/serial/FirstStart 89.9
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
342 TestStartStop/group/newest-cni/serial/Stop 17.34
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.38
344 TestStartStop/group/newest-cni/serial/SecondStart 66.29
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.64
348 TestStartStop/group/newest-cni/serial/Pause 4.53
TestDownloadOnly/v1.16.0/json-events (12.35s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220125155829-11219 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:73: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220125155829-11219 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (12.349095699s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.35s)

TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220125155829-11219
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220125155829-11219: exit status 85 (282.477886ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/25 15:58:29
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0125 15:58:29.416222   11236 out.go:297] Setting OutFile to fd 1 ...
	I0125 15:58:29.416370   11236 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 15:58:29.416374   11236 out.go:310] Setting ErrFile to fd 2...
	I0125 15:58:29.416377   11236 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 15:58:29.416445   11236 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/bin
	W0125 15:58:29.416577   11236 root.go:293] Error reading config file at /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/config/config.json: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/config/config.json: no such file or directory
	I0125 15:58:29.417059   11236 out.go:304] Setting JSON to true
	I0125 15:58:29.442834   11236 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5284,"bootTime":1643149825,"procs":316,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0125 15:58:29.442935   11236 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0125 15:58:29.469873   11236 notify.go:174] Checking for updates...
	W0125 15:58:29.469874   11236 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/preloaded-tarball: no such file or directory
	I0125 15:58:29.496892   11236 driver.go:344] Setting default libvirt URI to qemu:///system
	W0125 15:58:29.576792   11236 docker.go:108] docker version returned error: exit status 1
	I0125 15:58:29.618507   11236 start.go:280] selected driver: docker
	I0125 15:58:29.618535   11236 start.go:795] validating driver "docker" against <nil>
	I0125 15:58:29.618715   11236 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 15:58:29.778928   11236 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 15:58:29.846793   11236 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 15:58:29.994000   11236 info.go:263] docker info: {ID: Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver: DriverStatus:[] SystemStatus:<nil> Plugins:{Volume:[] Network:[] Authorization:<nil> Log:[]} MemoryLimit:false SwapLimit:false KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:false CPUCfsQuota:false CPUShares:false CPUSet:false PidsLimit:false IPv4Forwarding:false BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:0 OomKillDisable:false NGoroutines:0 SystemTime:0001-01-01 00:00:00 +0000 UTC LoggingDriver: CgroupDriver: NEventsListener:0 KernelVersion: OperatingSystem: OSType: Architecture: IndexServerAddress: RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[] IndexConfigs:{DockerIo:{Name: Mirrors:[] Secure:false Official:false}} Mirrors:[]} NCPU:0 MemTotal:0 GenericResources:<nil> DockerRootDir: HTTPProxy: HTTPSProxy: NoProxy: Name: Labels:[] ExperimentalBuild:false ServerVersion: ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:}} DefaultRuntime: Swarm:{NodeID: NodeAddr: LocalNodeState: ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary: ContainerdCommit:{ID: Expected:} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[] ProductLicense: Warnings:<nil> ServerErrors:[Error response from daemon: dial unix docker.raw.sock: connect: connection refused] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 15:58:30.019537   11236 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
	I0125 15:58:30.073569   11236 start_flags.go:369] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0125 15:58:30.073683   11236 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
	I0125 15:58:30.073696   11236 start_flags.go:810] Wait components to verify : map[apiserver:true system_pods:true]
	I0125 15:58:30.073747   11236 cni.go:93] Creating CNI manager for ""
	I0125 15:58:30.073770   11236 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0125 15:58:30.073784   11236 start_flags.go:302] config:
	{Name:download-only-20220125155829-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220125155829-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0125 15:58:30.099422   11236 cache.go:120] Beginning downloading kic base image for docker with docker
	I0125 15:58:30.125673   11236 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local docker daemon
	I0125 15:58:30.125680   11236 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0125 15:58:30.127171   11236 cache.go:107] acquiring lock: {Name:mk2396fe6fe12a3121f47a9aad010b4d2ba03444 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 15:58:30.127200   11236 cache.go:107] acquiring lock: {Name:mk453979a91ca5afe4b0109f4d1b0a921a84a2a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 15:58:30.128651   11236 cache.go:107] acquiring lock: {Name:mk76eb6a6505ed0f7823a1fd5777c2d689059264 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 15:58:30.128651   11236 cache.go:107] acquiring lock: {Name:mk85145b31fe420e1c1402f3490ca08f4ab47486 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 15:58:30.128652   11236 cache.go:107] acquiring lock: {Name:mk1bfa0776f035a7c8f633b9129bcf426d80a2a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 15:58:30.128684   11236 cache.go:107] acquiring lock: {Name:mk1d1642c430fef7f8f51b584326999502b47176 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 15:58:30.128715   11236 cache.go:107] acquiring lock: {Name:mk9f05a1cfccb3de4882045f7d459133011a5e38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 15:58:30.128722   11236 cache.go:107] acquiring lock: {Name:mke0823868df4f488de79d50bb169f4c76bf791e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 15:58:30.128756   11236 cache.go:107] acquiring lock: {Name:mk53262cb1be69a125b9a3375064dcdc142d45f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 15:58:30.128744   11236 cache.go:107] acquiring lock: {Name:mk38b4f35224aa3dad49bcef58bab3748239e67b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0125 15:58:30.128976   11236 profile.go:147] Saving config to /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/download-only-20220125155829-11219/config.json ...
	I0125 15:58:30.129107   11236 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/download-only-20220125155829-11219/config.json: {Name:mk33a6adcb39484a0d090db1e5e97e62c5dd4526 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0125 15:58:30.129250   11236 image.go:134] retrieving image: k8s.gcr.io/kube-apiserver:v1.16.0
	I0125 15:58:30.129294   11236 image.go:134] retrieving image: k8s.gcr.io/coredns:1.6.2
	I0125 15:58:30.129318   11236 image.go:134] retrieving image: k8s.gcr.io/pause:3.1
	I0125 15:58:30.129370   11236 image.go:134] retrieving image: k8s.gcr.io/kube-scheduler:v1.16.0
	I0125 15:58:30.129370   11236 image.go:134] retrieving image: docker.io/kubernetesui/dashboard:v2.3.1
	I0125 15:58:30.129408   11236 image.go:134] retrieving image: k8s.gcr.io/kube-proxy:v1.16.0
	I0125 15:58:30.129411   11236 image.go:134] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.7
	I0125 15:58:30.129452   11236 image.go:134] retrieving image: k8s.gcr.io/kube-controller-manager:v1.16.0
	I0125 15:58:30.129516   11236 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0125 15:58:30.129544   11236 image.go:134] retrieving image: k8s.gcr.io/etcd:3.3.15-0
	I0125 15:58:30.129849   11236 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0125 15:58:30.130234   11236 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/linux/v1.16.0/kubectl
	I0125 15:58:30.130235   11236 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubeadm.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/linux/v1.16.0/kubeadm
	I0125 15:58:30.130234   11236 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubelet.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/linux/v1.16.0/kubelet
	I0125 15:58:30.132142   11236 image.go:180] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0125 15:58:30.132151   11236 image.go:180] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0125 15:58:30.132435   11236 image.go:180] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0125 15:58:30.132470   11236 image.go:180] daemon lookup for k8s.gcr.io/kube-proxy:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0125 15:58:30.133637   11236 image.go:180] daemon lookup for docker.io/kubernetesui/dashboard:v2.3.1: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0125 15:58:30.133640   11236 image.go:180] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0125 15:58:30.134172   11236 image.go:180] daemon lookup for k8s.gcr.io/etcd:3.3.15-0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0125 15:58:30.134248   11236 image.go:180] daemon lookup for k8s.gcr.io/kube-apiserver:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0125 15:58:30.134590   11236 image.go:180] daemon lookup for k8s.gcr.io/coredns:1.6.2: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0125 15:58:30.134903   11236 image.go:180] daemon lookup for k8s.gcr.io/kube-scheduler:v1.16.0: Error response from daemon: dial unix docker.raw.sock: connect: connection refused
	I0125 15:58:30.238795   11236 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b to local cache
	I0125 15:58:30.238980   11236 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local cache directory
	I0125 15:58:30.239065   11236 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b to local cache
	I0125 15:58:30.769537   11236 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/pause_3.1
	I0125 15:58:30.814857   11236 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0
	W0125 15:58:30.816601   11236 image.go:190] authn lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7 (trying anon): GET https://index.docker.io/v2/kubernetesui/metrics-scraper/manifests/v1.0.7: TOOMANYREQUESTS: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	I0125 15:58:30.818624   11236 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0
	W0125 15:58:30.870772   11236 image.go:190] authn lookup for docker.io/kubernetesui/dashboard:v2.3.1 (trying anon): GET https://index.docker.io/v2/kubernetesui/dashboard/manifests/v2.3.1: TOOMANYREQUESTS: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	I0125 15:58:30.882829   11236 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0125 15:58:30.882829   11236 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0
	I0125 15:58:30.883047   11236 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
	I0125 15:58:30.883063   11236 cache.go:96] cache image "k8s.gcr.io/pause:3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 755.390833ms
	I0125 15:58:30.883075   11236 cache.go:80] save to tar file k8s.gcr.io/pause:3.1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
	I0125 15:58:30.927201   11236 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2
	I0125 15:58:30.929189   11236 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0
	I0125 15:58:30.949955   11236 cache.go:161] opening:  /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0
	I0125 15:58:31.204441   11236 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0125 15:58:31.204460   11236 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.077134813s
	I0125 15:58:31.204476   11236 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0125 15:58:31.220989   11236 image.go:194] remote lookup for docker.io/kubernetesui/metrics-scraper:v1.0.7: GET https://index.docker.io/v2/kubernetesui/metrics-scraper/manifests/v1.0.7: TOOMANYREQUESTS: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	I0125 15:58:31.221028   11236 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 1.094141776s
	I0125 15:58:31.275232   11236 image.go:194] remote lookup for docker.io/kubernetesui/dashboard:v2.3.1: GET https://index.docker.io/v2/kubernetesui/dashboard/manifests/v2.3.1: TOOMANYREQUESTS: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	I0125 15:58:31.275270   11236 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 1.147799352s
	I0125 15:58:31.349844   11236 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/darwin/v1.16.0/kubectl
	I0125 15:58:31.604002   11236 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 exists
	I0125 15:58:31.604021   11236 cache.go:96] cache image "k8s.gcr.io/coredns:1.6.2" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2" took 1.476651039s
	I0125 15:58:31.604035   11236 cache.go:80] save to tar file k8s.gcr.io/coredns:1.6.2 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/coredns_1.6.2 succeeded
	I0125 15:58:31.980971   11236 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 exists
	I0125 15:58:31.980982   11236 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 exists
	I0125 15:58:31.980996   11236 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0" took 1.85334236s
	I0125 15:58:31.980995   11236 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0" took 1.853684604s
	I0125 15:58:31.981005   11236 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.16.0 succeeded
	I0125 15:58:31.981010   11236 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.16.0 succeeded
	I0125 15:58:32.065713   11236 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 exists
	I0125 15:58:32.065745   11236 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0" took 1.937111658s
	I0125 15:58:32.065781   11236 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.16.0 succeeded
	I0125 15:58:32.276972   11236 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 exists
	I0125 15:58:32.276989   11236 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.16.0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0" took 2.150906292s
	I0125 15:58:32.276998   11236 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.16.0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.0 succeeded
	I0125 15:58:32.748966   11236 cache.go:156] /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 exists
	I0125 15:58:32.748985   11236 cache.go:96] cache image "k8s.gcr.io/etcd:3.3.15-0" -> "/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0" took 2.620997571s
	I0125 15:58:32.748993   11236 cache.go:80] save to tar file k8s.gcr.io/etcd:3.3.15-0 -> /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/cache/images/k8s.gcr.io/etcd_3.3.15-0 succeeded
	I0125 15:58:32.749000   11236 cache.go:87] Successfully saved all images to host disk.
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220125155829-11219"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.29s)
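
Note on the TOOMANYREQUESTS lines in the "Last Start" log above: Docker Hub throttles anonymous manifest pulls, so the dashboard and metrics-scraper lookups fall back to whatever is already in the local image cache. A minimal workaround sketch for a host that keeps hitting the limit, assuming Docker Hub credentials are available (DOCKER_USER and DOCKER_PASS are placeholders, not values from this run):

  # authenticate the host Docker client; minikube's image lookups read the
  # credentials in ~/.docker/config.json and get the higher authenticated limit
  echo "$DOCKER_PASS" | docker login --username "$DOCKER_USER" --password-stdin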

TestDownloadOnly/v1.23.2/json-events (2.86s)
=== RUN   TestDownloadOnly/v1.23.2/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220125155829-11219 --force --alsologtostderr --kubernetes-version=v1.23.2 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:73: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220125155829-11219 --force --alsologtostderr --kubernetes-version=v1.23.2 --container-runtime=docker --driver=docker : (2.864291503s)
--- PASS: TestDownloadOnly/v1.23.2/json-events (2.86s)

TestDownloadOnly/v1.23.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.23.2/kubectl
--- PASS: TestDownloadOnly/v1.23.2/kubectl (0.00s)

TestDownloadOnly/v1.23.2/LogsDuration (0.28s)
=== RUN   TestDownloadOnly/v1.23.2/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220125155829-11219
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220125155829-11219: exit status 85 (280.835067ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/25 15:58:42
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220125155829-11219"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.2/LogsDuration (0.28s)

TestDownloadOnly/v1.23.3-rc.0/json-events (4.91s)
=== RUN   TestDownloadOnly/v1.23.3-rc.0/json-events
aaa_download_only_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220125155829-11219 --force --alsologtostderr --kubernetes-version=v1.23.3-rc.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:73: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-20220125155829-11219 --force --alsologtostderr --kubernetes-version=v1.23.3-rc.0 --container-runtime=docker --driver=docker : (4.913414258s)
--- PASS: TestDownloadOnly/v1.23.3-rc.0/json-events (4.91s)

TestDownloadOnly/v1.23.3-rc.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.23.3-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.3-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.23.3-rc.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.23.3-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.23.3-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.23.3-rc.0/LogsDuration (0.28s)
=== RUN   TestDownloadOnly/v1.23.3-rc.0/LogsDuration
aaa_download_only_test.go:175: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-20220125155829-11219
aaa_download_only_test.go:175: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-20220125155829-11219: exit status 85 (277.147982ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/01/25 15:58:57
	Running on machine: administrators-Mac-mini
	Binary: Built with gc go1.17.6 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220125155829-11219"

-- /stdout --
aaa_download_only_test.go:176: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.3-rc.0/LogsDuration (0.28s)

TestDownloadOnly/DeleteAll (1.05s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:193: (dbg) Run:  out/minikube-darwin-amd64 delete --all
aaa_download_only_test.go:193: (dbg) Done: out/minikube-darwin-amd64 delete --all: (1.047818708s)
--- PASS: TestDownloadOnly/DeleteAll (1.05s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.6s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-20220125155829-11219
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.60s)

TestBinaryMirror (6.14s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:316: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220125155908-11219 --alsologtostderr --binary-mirror http://127.0.0.1:59455 --driver=docker 
aaa_download_only_test.go:316: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-20220125155908-11219 --alsologtostderr --binary-mirror http://127.0.0.1:59455 --driver=docker : (5.286310785s)
helpers_test.go:176: Cleaning up "binary-mirror-20220125155908-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-20220125155908-11219
--- PASS: TestBinaryMirror (6.14s)
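
TestBinaryMirror points --binary-mirror at a local HTTP server so the kubectl/kubelet/kubeadm binaries come from the mirror instead of storage.googleapis.com. A rough manual reproduction sketch, assuming a directory that mimics the kubernetes-release path layout (the port and directory here are arbitrary examples, not from this run):

  # serve a local mirror; it must contain release/<version>/bin/<os>/<arch>/... paths
  python3 -m http.server 59455 --directory /tmp/k8s-mirror &
  out/minikube-darwin-amd64 start --download-only -p mirror-demo --binary-mirror http://127.0.0.1:59455 --driver=docker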

TestOffline (127.59s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-20220125165334-11219 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

=== CONT  TestOffline
aab_offline_test.go:56: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-20220125165334-11219 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (1m50.54724098s)
helpers_test.go:176: Cleaning up "offline-docker-20220125165334-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-20220125165334-11219
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-20220125165334-11219: (17.046823745s)
--- PASS: TestOffline (127.59s)

TestAddons/Setup (159.68s)
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-20220125155914-11219 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-20220125155914-11219 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=olm --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m39.676073519s)
--- PASS: TestAddons/Setup (159.68s)

TestAddons/parallel/MetricsServer (5.89s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:358: metrics-server stabilized in 2.507074ms
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-6b76bd68b6-q7lzp" [024cf6e4-2027-47c5-afd4-7f61412bb88e] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.018052499s
addons_test.go:366: (dbg) Run:  kubectl --context addons-20220125155914-11219 top pods -n kube-system
addons_test.go:383: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220125155914-11219 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.89s)

TestAddons/parallel/HelmTiller (12.1s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:407: tiller-deploy stabilized in 16.670855ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-6d67d5465d-gcwkb" [14bf08d8-c45e-4516-81b9-93aa937c287e] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.015308084s
addons_test.go:424: (dbg) Run:  kubectl --context addons-20220125155914-11219 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:424: (dbg) Done: kubectl --context addons-20220125155914-11219 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.416237049s)
addons_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220125155914-11219 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.10s)

TestAddons/parallel/CSI (41.9s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:512: csi-hostpath-driver pods stabilized in 5.923937ms
addons_test.go:515: (dbg) Run:  kubectl --context addons-20220125155914-11219 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:520: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220125155914-11219 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:525: (dbg) Run:  kubectl --context addons-20220125155914-11219 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:530: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [fe50bb0d-d4c9-40f0-9b47-805e12680233] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [fe50bb0d-d4c9-40f0-9b47-805e12680233] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:343: "task-pv-pod" [fe50bb0d-d4c9-40f0-9b47-805e12680233] Running
addons_test.go:530: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.014903385s
addons_test.go:535: (dbg) Run:  kubectl --context addons-20220125155914-11219 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:540: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220125155914-11219 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20220125155914-11219 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:545: (dbg) Run:  kubectl --context addons-20220125155914-11219 delete pod task-pv-pod
addons_test.go:551: (dbg) Run:  kubectl --context addons-20220125155914-11219 delete pvc hpvc
addons_test.go:557: (dbg) Run:  kubectl --context addons-20220125155914-11219 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:562: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20220125155914-11219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:567: (dbg) Run:  kubectl --context addons-20220125155914-11219 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:572: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [153c2e27-ccbf-4e44-b902-d185080dd9ff] Pending
helpers_test.go:343: "task-pv-pod-restore" [153c2e27-ccbf-4e44-b902-d185080dd9ff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod-restore" [153c2e27-ccbf-4e44-b902-d185080dd9ff] Running
addons_test.go:572: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.010277627s
addons_test.go:577: (dbg) Run:  kubectl --context addons-20220125155914-11219 delete pod task-pv-pod-restore
addons_test.go:581: (dbg) Run:  kubectl --context addons-20220125155914-11219 delete pvc hpvc-restore
addons_test.go:585: (dbg) Run:  kubectl --context addons-20220125155914-11219 delete volumesnapshot new-snapshot-demo
addons_test.go:589: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220125155914-11219 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:589: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220125155914-11219 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.986265259s)
addons_test.go:593: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220125155914-11219 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.90s)
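
The CSI test above walks a complete snapshot/restore cycle: provision a PVC, run a pod against it, snapshot the volume, delete the original pod and PVC, then restore a new PVC from the snapshot and run a second pod. Condensed to the kubectl calls the test issues (--context flags omitted; the manifests are the ones under testdata/ relative to minikube's integration test directory):

  kubectl create -f testdata/csi-hostpath-driver/pvc.yaml             # source PVC (hpvc)
  kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml          # pod using it (task-pv-pod)
  kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot (new-snapshot-demo)
  kubectl delete pod task-pv-pod && kubectl delete pvc hpvc           # drop the source
  kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # hpvc-restore, sourced from the snapshot
  kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod against the restored volume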

TestAddons/serial/GCPAuth (16.32s)
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:604: (dbg) Run:  kubectl --context addons-20220125155914-11219 create -f testdata/busybox.yaml
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [5fba2d3d-3c9b-4681-95e8-3feee54ddc54] Pending
helpers_test.go:343: "busybox" [5fba2d3d-3c9b-4681-95e8-3feee54ddc54] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [5fba2d3d-3c9b-4681-95e8-3feee54ddc54] Running
addons_test.go:610: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.012991964s
addons_test.go:616: (dbg) Run:  kubectl --context addons-20220125155914-11219 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:629: (dbg) Run:  kubectl --context addons-20220125155914-11219 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:653: (dbg) Run:  kubectl --context addons-20220125155914-11219 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:666: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220125155914-11219 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:666: (dbg) Done: out/minikube-darwin-amd64 -p addons-20220125155914-11219 addons disable gcp-auth --alsologtostderr -v=1: (6.677298928s)
--- PASS: TestAddons/serial/GCPAuth (16.32s)

TestAddons/StoppedEnableDisable (18.02s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-20220125155914-11219
addons_test.go:133: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-20220125155914-11219: (17.587637756s)
addons_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-20220125155914-11219
addons_test.go:141: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-20220125155914-11219
--- PASS: TestAddons/StoppedEnableDisable (18.02s)

TestCertOptions (69.11s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-20220125165646-11219 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0125 16:56:53.980148   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory

=== CONT  TestCertOptions
cert_options_test.go:50: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-20220125165646-11219 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (55.728450556s)
cert_options_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-20220125165646-11219 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-20220125165646-11219 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-20220125165646-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-20220125165646-11219
E0125 16:57:44.619125   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-20220125165646-11219: (12.0350912s)
--- PASS: TestCertOptions (69.11s)
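
What cert_options_test.go:61 is checking with that openssl call is that the extra --apiserver-ips and --apiserver-names values actually land in the generated apiserver certificate as Subject Alternative Names. A quick manual spot-check along the same lines, runnable while the profile still exists (it is deleted at the end of this test):

  out/minikube-darwin-amd64 -p cert-options-20220125165646-11219 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"   # expect 127.0.0.1, 192.168.15.15, localhost, www.google.com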

TestCertExpiration (260.67s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220125165643-11219 --memory=2048 --cert-expiration=3m --driver=docker 

=== CONT  TestCertExpiration
cert_options_test.go:124: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220125165643-11219 --memory=2048 --cert-expiration=3m --driver=docker : (57.787227907s)

=== CONT  TestCertExpiration
cert_options_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-20220125165643-11219 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-20220125165643-11219 --memory=2048 --cert-expiration=8760h --driver=docker : (6.652129622s)
helpers_test.go:176: Cleaning up "cert-expiration-20220125165643-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-20220125165643-11219

=== CONT  TestCertExpiration
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-20220125165643-11219: (16.228995044s)
--- PASS: TestCertExpiration (260.67s)

TestDockerFlags (61.48s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:46: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-20220125165541-11219 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
E0125 16:55:47.727335   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
docker_test.go:46: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-20220125165541-11219 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (45.548249829s)
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220125165541-11219 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:62: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-20220125165541-11219 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-20220125165541-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-20220125165541-11219

=== CONT  TestDockerFlags
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-20220125165541-11219: (14.449718514s)
--- PASS: TestDockerFlags (61.48s)

TestForceSystemdFlag (83.05s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-20220125165523-11219 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 

=== CONT  TestForceSystemdFlag
docker_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-20220125165523-11219 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (1m6.118611096s)
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-20220125165523-11219 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-20220125165523-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-20220125165523-11219
E0125 16:56:37.093940   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory

=== CONT  TestForceSystemdFlag
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-20220125165523-11219: (16.23738938s)
--- PASS: TestForceSystemdFlag (83.05s)

TestForceSystemdEnv (83.53s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-20220125165400-11219 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E0125 16:54:04.888712   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
docker_test.go:151: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-20220125165400-11219 --memory=2048 --alsologtostderr -v=5 --driver=docker : (1m7.373614905s)
docker_test.go:105: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-20220125165400-11219 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-20220125165400-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-20220125165400-11219
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-20220125165400-11219: (15.474311392s)
--- PASS: TestForceSystemdEnv (83.53s)
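
Both force-systemd variants hinge on the same probe: docker info inside the node must report systemd rather than cgroupfs as the cgroup driver. A sketch of running the check by hand while such a cluster is still up (this run's profile has since been deleted):

  out/minikube-darwin-amd64 -p force-systemd-env-20220125165400-11219 \
      ssh "docker info --format {{.CgroupDriver}}"   # expected output: systemd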

TestHyperKitDriverInstallOrUpdate (7.24s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.24s)

TestErrorSpam/setup (72.12s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-20220125160335-11219 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 --driver=docker 
error_spam_test.go:79: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-20220125160335-11219 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 --driver=docker : (1m12.124511442s)
error_spam_test.go:89: acceptable stderr: "! /usr/local/bin/kubectl is version 1.19.7, which may have incompatibilites with Kubernetes 1.23.2."
--- PASS: TestErrorSpam/setup (72.12s)
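
The acceptable-stderr line above is minikube warning that the host kubectl (1.19.7) is several minor versions behind the 1.23.2 cluster. One way to sidestep the skew without replacing the host binary is minikube's kubectl passthrough, which runs a kubectl matched to the cluster version, downloading it if needed (the same mechanism TestFunctional/serial/MinikubeKubectlCmd exercises later in this report):

  out/minikube-darwin-amd64 kubectl -- get pods -A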

TestErrorSpam/start (2.26s)
=== RUN   TestErrorSpam/start
error_spam_test.go:214: Cleaning up 1 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 start --dry-run
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 start --dry-run
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 start --dry-run
--- PASS: TestErrorSpam/start (2.26s)

TestErrorSpam/status (1.9s)
=== RUN   TestErrorSpam/status
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 status
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 status
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 status
--- PASS: TestErrorSpam/status (1.90s)

TestErrorSpam/pause (2.12s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 pause
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 pause
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 pause
--- PASS: TestErrorSpam/pause (2.12s)

TestErrorSpam/unpause (2.12s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 unpause
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 unpause
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 unpause
--- PASS: TestErrorSpam/unpause (2.12s)

TestErrorSpam/stop (17.99s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:214: Cleaning up 0 logfile(s) ...
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 stop
error_spam_test.go:157: (dbg) Done: out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 stop: (17.283715662s)
error_spam_test.go:157: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 stop
error_spam_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-20220125160335-11219 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-20220125160335-11219 stop
--- PASS: TestErrorSpam/stop (17.99s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1707: local sync path: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/files/etc/test/nested/copy/11219/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (124.86s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2089: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220125160520-11219 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
E0125 16:06:53.935892   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:06:53.942003   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:06:53.955908   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:06:53.977501   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:06:54.022674   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:06:54.109687   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:06:54.271561   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:06:54.597278   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:06:55.247272   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:06:56.528222   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:06:59.097216   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:07:04.221113   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:07:14.468244   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
functional_test.go:2089: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220125160520-11219 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (2m4.855249581s)
--- PASS: TestFunctional/serial/StartWithProxy (124.86s)
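
The E0125 cert_rotation.go:168 spam interleaved with this start appears to be client-go's certificate reload watcher still tracking the client.crt path of the already-deleted addons-20220125155914-11219 profile; the lines are noise from the shared test process, not failures of this test. On a workstation seeing the same messages, a plausible cleanup sketch (the context and cluster names here are examples taken from this run) is to drop the stale kubeconfig entries:

  kubectl config delete-context addons-20220125155914-11219
  kubectl config delete-cluster addons-20220125155914-11219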

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (7.38s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220125160520-11219 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220125160520-11219 --alsologtostderr -v=8: (7.383501789s)
functional_test.go:659: soft start took 7.384015704s for "functional-20220125160520-11219" cluster.
--- PASS: TestFunctional/serial/SoftStart (7.38s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (1.76s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-20220125160520-11219 get po -A
functional_test.go:692: (dbg) Done: kubectl --context functional-20220125160520-11219 get po -A: (1.764506961s)
--- PASS: TestFunctional/serial/KubectlGetPods (1.76s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.54s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache add k8s.gcr.io/pause:3.1
E0125 16:07:34.953406   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache add k8s.gcr.io/pause:3.1: (1.504399331s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache add k8s.gcr.io/pause:3.3: (2.05393937s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache add k8s.gcr.io/pause:latest: (1.985815597s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.54s)

TestFunctional/serial/CacheCmd/cache/add_local (2.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220125160520-11219 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/functional-20220125160520-112193150124461
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache add minikube-local-cache-test:functional-20220125160520-11219
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache add minikube-local-cache-test:functional-20220125160520-11219: (1.51410836s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache delete minikube-local-cache-test:functional-20220125160520-11219
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220125160520-11219
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.15s)
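
add_local round-trips a locally built image through minikube's cache: build on the host, cache add to copy it into the node, cache delete to drop it again. The same sequence by hand (the image tag is an arbitrary example, not from this run):

  docker build -t local-cache-demo:latest .
  out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache add local-cache-demo:latest
  out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache delete local-cache-demo:latest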

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.7s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.70s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (595.917718ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache reload: (1.369252844s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.23s)
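
cache_reload shows the recovery path: after the image is removed inside the node (the crictl inspecti exit status 1 above), cache reload re-pushes everything in the host-side cache without another registry pull. Condensed from the commands the test runs:

  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh sudo docker rmi k8s.gcr.io/pause:latest
  out/minikube-darwin-amd64 -p functional-20220125160520-11219 cache reload
  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # now succeeds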

TestFunctional/serial/CacheCmd/cache/delete (0.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.52s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 kubectl -- --context functional-20220125160520-11219 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.58s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-20220125160520-11219 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.58s)

TestFunctional/serial/ExtraConfig (62.21s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220125160520-11219 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0125 16:08:15.914853   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-20220125160520-11219 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m2.210627304s)
functional_test.go:757: restart took 1m2.210752717s for "functional-20220125160520-11219" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (62.21s)
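The one-minute restart above re-provisions the cluster with --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision. A sketch for verifying by hand that such a flag reached the API server, assuming minikube's kubeadm static-pod layout (profile name illustrative):
  # apiserver flags land in the static-pod manifest inside the node
  minikube -p functional ssh -- sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml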

TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:811: (dbg) Run:  kubectl --context functional-20220125160520-11219 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:826: etcd phase: Running
functional_test.go:836: etcd status: Ready
functional_test.go:826: kube-apiserver phase: Running
functional_test.go:836: kube-apiserver status: Ready
functional_test.go:826: kube-controller-manager phase: Running
functional_test.go:836: kube-controller-manager status: Ready
functional_test.go:826: kube-scheduler phase: Running
functional_test.go:836: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
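The health check parses the JSON above and requires every control-plane pod to be Running and Ready. A hand-run equivalent, assuming the `tier=control-plane` label that kubeadm applies to its static pods:
  kubectl --context functional-20220125160520-11219 -n kube-system get po -l tier=control-plane \
    -o jsonpath='{range .items[*]}{.metadata.name}: {.status.phase}{"\n"}{end}'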

TestFunctional/serial/LogsCmd (2.55s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 logs: (2.5450923s)
--- PASS: TestFunctional/serial/LogsCmd (2.55s)

TestFunctional/serial/LogsFileCmd (2.32s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1249: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/functional-20220125160520-112194099782669/logs.txt
functional_test.go:1249: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/functional-20220125160520-112194099782669/logs.txt: (2.317712138s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.32s)

TestFunctional/parallel/ConfigCmd (0.48s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220125160520-11219 config get cpus: exit status 14 (48.581966ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220125160520-11219 config get cpus: exit status 14 (50.225966ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
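The two non-zero exits above are the expected behavior: `config get` on an unset key exits with status 14 instead of printing an empty value, so scripts can branch on presence. A sketch (profile name illustrative):
  if ! minikube -p functional config get cpus >/dev/null 2>&1; then
    echo "cpus is unset; using the default"
  fi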

TestFunctional/parallel/DryRun (1.41s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:971: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220125160520-11219 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:971: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220125160520-11219 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (642.077553ms)

-- stdout --
	* [functional-20220125160520-11219] minikube v1.25.1 on Darwin 11.1
	  - MINIKUBE_LOCATION=13326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0125 16:09:56.218366   14077 out.go:297] Setting OutFile to fd 1 ...
	I0125 16:09:56.218489   14077 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:09:56.218494   14077 out.go:310] Setting ErrFile to fd 2...
	I0125 16:09:56.218497   14077 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:09:56.218570   14077 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/bin
	I0125 16:09:56.218821   14077 out.go:304] Setting JSON to false
	I0125 16:09:56.242093   14077 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5971,"bootTime":1643149825,"procs":312,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0125 16:09:56.242190   14077 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0125 16:09:56.286407   14077 out.go:176] * [functional-20220125160520-11219] minikube v1.25.1 on Darwin 11.1
	I0125 16:09:56.334968   14077 out.go:176]   - MINIKUBE_LOCATION=13326
	I0125 16:09:56.365890   14077 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	I0125 16:09:56.393684   14077 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0125 16:09:56.420190   14077 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0125 16:09:56.445947   14077 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	I0125 16:09:56.446733   14077 config.go:176] Loaded profile config "functional-20220125160520-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 16:09:56.447372   14077 driver.go:344] Setting default libvirt URI to qemu:///system
	I0125 16:09:56.554075   14077 docker.go:132] docker version: linux-20.10.5
	I0125 16:09:56.554220   14077 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 16:09:56.713427   14077 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-26 00:09:56.669065466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 16:09:56.740662   14077 out.go:176] * Using the docker driver based on existing profile
	I0125 16:09:56.740698   14077 start.go:280] selected driver: docker
	I0125 16:09:56.740710   14077 start.go:795] validating driver "docker" against &{Name:functional-20220125160520-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220125160520-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0125 16:09:56.740896   14077 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0125 16:09:56.770282   14077 out.go:176] 
	W0125 16:09:56.770489   14077 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0125 16:09:56.818177   14077 out.go:176] 

** /stderr **
functional_test.go:988: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220125160520-11219 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.41s)
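--dry-run performs the full validation pass without creating a node, so the undersized --memory request fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23 above), while the second run with default memory validates cleanly. A sketch, using an illustrative profile name:
  out/minikube-darwin-amd64 start -p scratch --dry-run --memory 250MB --driver=docker
  echo "exit=$?"  # 23 expected: 250MiB is below the 1800MB usable minimum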

TestFunctional/parallel/InternationalLanguage (0.63s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-20220125160520-11219 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-20220125160520-11219 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (634.523194ms)

-- stdout --
	* [functional-20220125160520-11219] minikube v1.25.1 sur Darwin 11.1
	  - MINIKUBE_LOCATION=13326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0125 16:09:50.912022   13915 out.go:297] Setting OutFile to fd 1 ...
	I0125 16:09:50.912218   13915 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:09:50.912223   13915 out.go:310] Setting ErrFile to fd 2...
	I0125 16:09:50.912227   13915 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:09:50.912354   13915 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/bin
	I0125 16:09:50.912635   13915 out.go:304] Setting JSON to false
	I0125 16:09:50.936118   13915 start.go:112] hostinfo: {"hostname":"administrators-Mac-mini.local","uptime":5965,"bootTime":1643149825,"procs":309,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.1","kernelVersion":"20.2.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0125 16:09:50.936227   13915 start.go:120] gopshost.Virtualization returned error: not implemented yet
	I0125 16:09:50.963262   13915 out.go:176] * [functional-20220125160520-11219] minikube v1.25.1 sur Darwin 11.1
	I0125 16:09:51.014938   13915 out.go:176]   - MINIKUBE_LOCATION=13326
	I0125 16:09:51.040749   13915 out.go:176]   - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	I0125 16:09:51.066889   13915 out.go:176]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0125 16:09:51.092888   13915 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0125 16:09:51.118682   13915 out.go:176]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	I0125 16:09:51.119511   13915 config.go:176] Loaded profile config "functional-20220125160520-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 16:09:51.120074   13915 driver.go:344] Setting default libvirt URI to qemu:///system
	I0125 16:09:51.213819   13915 docker.go:132] docker version: linux-20.10.5
	I0125 16:09:51.213948   13915 cli_runner.go:133] Run: docker system info --format "{{json .}}"
	I0125 16:09:51.372232   13915 info.go:263] docker info: {ID:HC2B:ZT4J:7LQQ:KUDL:VK6I:VI3L:CZSU:73C6:GUST:UZES:WKZP:VUS2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:51 SystemTime:2022-01-26 00:09:51.322166481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.25-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6234726400 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/local/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.6.0]] Warnings:<nil>}}
	I0125 16:09:51.419706   13915 out.go:176] * Utilisation du pilote docker basé sur le profil existant
	I0125 16:09:51.419734   13915 start.go:280] selected driver: docker
	I0125 16:09:51.419743   13915 start.go:795] validating driver "docker" against &{Name:functional-20220125160520-11219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.2 ClusterName:functional-20220125160520-11219 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
	I0125 16:09:51.419869   13915 start.go:806] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0125 16:09:51.447441   13915 out.go:176] 
	W0125 16:09:51.447564   13915 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0125 16:09:51.495692   13915 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.63s)

TestFunctional/parallel/StatusCmd (2s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:855: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:861: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:873: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (2.00s)

TestFunctional/parallel/AddonsCmd (0.28s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1541: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 addons list
functional_test.go:1553: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.28s)

TestFunctional/parallel/PersistentVolumeClaim (26.01s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [70343b09-0aa7-4725-89da-48b775d6596e] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009588995s
functional_test_pvc_test.go:50: (dbg) Run:  kubectl --context functional-20220125160520-11219 get storageclass -o=json
functional_test_pvc_test.go:70: (dbg) Run:  kubectl --context functional-20220125160520-11219 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:77: (dbg) Run:  kubectl --context functional-20220125160520-11219 get pvc myclaim -o=json
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220125160520-11219 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [cdf2175f-6016-4f5e-bcb8-77410729c10a] Pending
helpers_test.go:343: "sp-pod" [cdf2175f-6016-4f5e-bcb8-77410729c10a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [cdf2175f-6016-4f5e-bcb8-77410729c10a] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.007920089s
functional_test_pvc_test.go:101: (dbg) Run:  kubectl --context functional-20220125160520-11219 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:107: (dbg) Run:  kubectl --context functional-20220125160520-11219 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:107: (dbg) Done: kubectl --context functional-20220125160520-11219 delete -f testdata/storage-provisioner/pod.yaml: (1.070445103s)
functional_test_pvc_test.go:126: (dbg) Run:  kubectl --context functional-20220125160520-11219 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [198a2e8d-f839-4dbc-bb13-edfc89be0adb] Pending
helpers_test.go:343: "sp-pod" [198a2e8d-f839-4dbc-bb13-edfc89be0adb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [198a2e8d-f839-4dbc-bb13-edfc89be0adb] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:131: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007959488s
functional_test_pvc_test.go:115: (dbg) Run:  kubectl --context functional-20220125160520-11219 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.01s)
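The pass above checks persistence, not just binding: a file written through the claim must survive deletion and re-creation of the pod. The same sequence by hand, using the pod and mount names from the log (--context flags omitted for brevity):
  kubectl exec sp-pod -- touch /tmp/mount/foo
  kubectl delete -f testdata/storage-provisioner/pod.yaml
  kubectl apply -f testdata/storage-provisioner/pod.yaml
  kubectl wait --for=condition=Ready pod/sp-pod --timeout=3m
  kubectl exec sp-pod -- ls /tmp/mount  # "foo" should still be listed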

TestFunctional/parallel/SSHCmd (1.26s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1576: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "echo hello"
functional_test.go:1593: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.26s)

TestFunctional/parallel/CpCmd (2.51s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh -n functional-20220125160520-11219 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 cp functional-20220125160520-11219:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mk_test1583688861/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh -n functional-20220125160520-11219 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.51s)

TestFunctional/parallel/MySQL (23.52s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1645: (dbg) Run:  kubectl --context functional-20220125160520-11219 replace --force -f testdata/mysql.yaml
functional_test.go:1651: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-hdwpb" [8951337b-9195-4a01-abe8-9b1810a040a1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-b87c45988-hdwpb" [8951337b-9195-4a01-abe8-9b1810a040a1] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1651: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.023340322s
functional_test.go:1659: (dbg) Run:  kubectl --context functional-20220125160520-11219 exec mysql-b87c45988-hdwpb -- mysql -ppassword -e "show databases;"
functional_test.go:1659: (dbg) Non-zero exit: kubectl --context functional-20220125160520-11219 exec mysql-b87c45988-hdwpb -- mysql -ppassword -e "show databases;": exit status 1 (184.287169ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1659: (dbg) Run:  kubectl --context functional-20220125160520-11219 exec mysql-b87c45988-hdwpb -- mysql -ppassword -e "show databases;"
functional_test.go:1659: (dbg) Non-zero exit: kubectl --context functional-20220125160520-11219 exec mysql-b87c45988-hdwpb -- mysql -ppassword -e "show databases;": exit status 1 (131.279609ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1659: (dbg) Run:  kubectl --context functional-20220125160520-11219 exec mysql-b87c45988-hdwpb -- mysql -ppassword -e "show databases;"
functional_test.go:1659: (dbg) Non-zero exit: kubectl --context functional-20220125160520-11219 exec mysql-b87c45988-hdwpb -- mysql -ppassword -e "show databases;": exit status 1 (138.002855ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1659: (dbg) Run:  kubectl --context functional-20220125160520-11219 exec mysql-b87c45988-hdwpb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.52s)
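The three ERROR 2002 exits above are benign startup noise: the pod reports Running before mysqld begins accepting connections on its socket, so the test retries until the query succeeds. A sketch of the same retry loop (pod name taken from the log, --context omitted):
  for i in 1 2 3 4 5; do
    kubectl exec mysql-b87c45988-hdwpb -- mysql -ppassword -e "show databases;" && break
    sleep 5
  done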

TestFunctional/parallel/FileSync (0.72s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1781: Checking for existence of /etc/test/nested/copy/11219/hosts within VM
functional_test.go:1783: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "sudo cat /etc/test/nested/copy/11219/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1788: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.72s)

TestFunctional/parallel/CertSync (4.21s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1824: Checking for existence of /etc/ssl/certs/11219.pem within VM
functional_test.go:1825: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "sudo cat /etc/ssl/certs/11219.pem"
functional_test.go:1824: Checking for existence of /usr/share/ca-certificates/11219.pem within VM
functional_test.go:1825: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "sudo cat /usr/share/ca-certificates/11219.pem"
functional_test.go:1824: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1825: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1851: Checking for existence of /etc/ssl/certs/112192.pem within VM
functional_test.go:1852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "sudo cat /etc/ssl/certs/112192.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1851: Checking for existence of /usr/share/ca-certificates/112192.pem within VM
functional_test.go:1852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "sudo cat /usr/share/ca-certificates/112192.pem"
functional_test.go:1851: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1852: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (4.21s)
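Each synced certificate is checked both by its plain path and by its hashed name (51391683.0, 3ec20f2e.0), which follows OpenSSL's subject-hash convention of <hash>.0 under /etc/ssl/certs. A sketch of deriving such a name, assuming openssl is available inside the node image:
  minikube -p functional-20220125160520-11219 ssh -- \
    sudo openssl x509 -in /usr/share/ca-certificates/11219.pem -noout -subject_hash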

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-20220125160520-11219 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1879: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1879: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "sudo systemctl is-active crio": exit status 1 (636.077156ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
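`systemctl is-active` exits 0 only for an active unit; the "exit status 3" above is systemd's code for an inactive unit, which is exactly what the test wants for a runtime other than the selected one (docker). A sketch (profile name shortened):
  minikube -p functional ssh -- sudo systemctl is-active crio || echo "crio inactive as expected (exit $?)"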

TestFunctional/parallel/Version/short (0.1s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.16s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 version -o=json --components
functional_test.go:2125: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 version -o=json --components: (1.161457065s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.2
k8s.gcr.io/kube-proxy:v1.23.2
k8s.gcr.io/kube-controller-manager:v1.23.2
k8s.gcr.io/kube-apiserver:v1.23.2
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220125160520-11219
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20220125160520-11219
docker.io/kubernetesui/metrics-scraper:v1.0.7
docker.io/kubernetesui/dashboard:v2.3.1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.41s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| docker.io/library/nginx                     | alpine                          | bef258acf10dc | 23.4MB |
| k8s.gcr.io/kube-proxy                       | v1.23.2                         | d922ca3da64b3 | 112MB  |
| k8s.gcr.io/pause                            | 3.6                             | 6270bb605e12e | 683kB  |
| docker.io/kubernetesui/metrics-scraper      | v1.0.7                          | 7801cfc6d5c07 | 34.4MB |
| k8s.gcr.io/pause                            | 3.1                             | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/echoserver                       | 1.8                             | 82e4c8a736a4f | 95.4MB |
| gcr.io/k8s-minikube/busybox                 | latest                          | beae173ccac6a | 1.24MB |
| docker.io/library/nginx                     | latest                          | 605c77e624ddb | 141MB  |
| docker.io/kubernetesui/dashboard            | v2.3.1                          | e1482a24335a6 | 220MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-20220125160520-11219 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/pause                            | latest                          | 350b164e7ae1d | 240kB  |
| docker.io/localhost/my-image                | functional-20220125160520-11219 | 50077987b3a88 | 1.24MB |
| k8s.gcr.io/etcd                             | 3.5.1-0                         | 25f8c7f3da61c | 293MB  |
| k8s.gcr.io/pause                            | 3.3                             | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-20220125160520-11219 | ba97c5093676b | 30B    |
| docker.io/library/mysql                     | 5.7                             | 42f82e150ec28 | 448MB  |
| k8s.gcr.io/kube-apiserver                   | v1.23.2                         | 8a0228dd6a683 | 135MB  |
| k8s.gcr.io/kube-scheduler                   | v1.23.2                         | 6114d758d6d16 | 53.5MB |
| k8s.gcr.io/kube-controller-manager          | v1.23.2                         | 4783639ba7e03 | 125MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                          | a4ca41631cc7a | 46.8MB |
|---------------------------------------------|---------------------------------|---------------|--------|
E0125 16:11:53.933298   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:12:21.673412   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.42s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls --format json:
[{"id":"6114d758d6d16d5b75586c98f8fb524d348fcbb125fb9be1e942dc7e91bbc5b4","repoDigests":[],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.2"],"size":"53500000"},{"id":"d922ca3da64b3f8464058d9ebbc361dd82cc86ea59cd337a4e33967bc8ede44f","repoDigests":[],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.2"],"size":"112000000"},{"id":"605c77e624ddb75e6110f997c58876baa13f8754486b461117934b24a9dc3a85","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"141000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23400000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":[],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"293000000"},{"id":"a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":[],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"46800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:v1.0.7"],"size":"34400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"8a0228dd6a683beecf635200927ab22cc4d9fb4302c340cae4a4c4b2b146aa24","repoDigests":[],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.2"],"size":"135000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220125160520-11219"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:v2.3.1"],"size":"220000000"},{"id":"ba97c5093676b831b1317f383646d8fdc1c9458ddb2ed54723b3dc0d7a77de0a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220125160520-11219"],"size":"30"},{"id":"42f82e150ec28054e528d2de42299225b0985530bc7e2f688941e5e323b372f3","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"448000000"},{"id":"4783639ba7e039dff291e4a9cc8a72f5f7c5bdd7f3441b57d3b5eb251cacc248","repoDigests":[],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.2"],"size":"125000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"50077987b3a8881bd601f1613ae341c407dcb894b4f24c9f201db78bb516c9a5","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-20220125160520-11219"],"size":"1240000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)
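The --format json output is a single JSON array whose entries carry id, repoDigests, repoTags, and size (bytes, as a string). A quick way to post-process it, sketched here with jq (jq is an assumption of the sketch, not part of the test):

    out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0])\t\(.size)"' | sort

This prints one tag-and-size line per image, which is easier to scan than the raw array.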

TestFunctional/parallel/ImageCommands/ImageListYaml (0.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls --format yaml:
- id: 4783639ba7e039dff291e4a9cc8a72f5f7c5bdd7f3441b57d3b5eb251cacc248
repoDigests: []
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.2
size: "125000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220125160520-11219
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 42f82e150ec28054e528d2de42299225b0985530bc7e2f688941e5e323b372f3
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "448000000"
- id: a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests: []
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "46800000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests: []
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "293000000"
- id: bef258acf10dc257d641c47c3a600c92f87be4b4ce4a5e4752b3eade7533dcd9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23400000"
- id: 8a0228dd6a683beecf635200927ab22cc4d9fb4302c340cae4a4c4b2b146aa24
repoDigests: []
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.2
size: "135000000"
- id: 6114d758d6d16d5b75586c98f8fb524d348fcbb125fb9be1e942dc7e91bbc5b4
repoDigests: []
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.2
size: "53500000"
- id: d922ca3da64b3f8464058d9ebbc361dd82cc86ea59cd337a4e33967bc8ede44f
repoDigests: []
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.2
size: "112000000"
- id: 7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:v1.0.7
size: "34400000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: ba97c5093676b831b1317f383646d8fdc1c9458ddb2ed54723b3dc0d7a77de0a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220125160520-11219
size: "30"
- id: e1482a24335a6e76d438ae175f79409004588570d3e5dbb4c8140e025e848570
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:v2.3.1
size: "220000000"
- id: 605c77e624ddb75e6110f997c58876baa13f8754486b461117934b24a9dc3a85
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "141000000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.43s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh pgrep buildkitd: exit status 1 (585.969498ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image build -t localhost/my-image:functional-20220125160520-11219 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 image build -t localhost/my-image:functional-20220125160520-11219 testdata/build: (2.525506698s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-20220125160520-11219 image build -t localhost/my-image:functional-20220125160520-11219 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 973fca18de81
Removing intermediate container 973fca18de81
---> cba0883a199f
Step 3/3 : ADD content.txt /
---> 50077987b3a8
Successfully built 50077987b3a8
Successfully tagged localhost/my-image:functional-20220125160520-11219
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.53s)
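Judging from the three build steps in the stdout above, the testdata/build context boils down to a three-line Dockerfile; a hypothetical reconstruction (not the verbatim fixture) would be:

    # hypothetical reconstruction of testdata/build/Dockerfile
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF

The earlier pgrep buildkitd probe exiting non-zero appears to be expected with the docker runtime: it tells the test no BuildKit daemon is present, and the build then goes through the classic Docker builder, as the "Sending build context to Docker daemon" line confirms.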

TestFunctional/parallel/ImageCommands/Setup (2.48s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.359836682s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220125160520-11219
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.48s)

TestFunctional/parallel/DockerEnv/bash (2.87s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220125160520-11219 docker-env) && out/minikube-darwin-amd64 status -p functional-20220125160520-11219"

=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220125160520-11219 docker-env) && out/minikube-darwin-amd64 status -p functional-20220125160520-11219": (1.673817809s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220125160520-11219 docker-env) && docker images"
functional_test.go:518: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-20220125160520-11219 docker-env) && docker images": (1.195451176s)
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.87s)
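docker-env prints shell exports that point the host's docker CLI at the Docker daemon inside the minikube node, which is why eval $(...) followed by docker images lists the cluster-side images. The exported variables look roughly like the following (variable names from memory and values elided; run the command yourself for the exact set):

    # approximate output of: out/minikube-darwin-amd64 -p functional-20220125160520-11219 docker-env
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://127.0.0.1:..."
    export DOCKER_CERT_PATH=".../.minikube/certs"
    export MINIKUBE_ACTIVE_DOCKERD="functional-20220125160520-11219"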

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.05s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220125160520-11219

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220125160520-11219: (3.566931638s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1971: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.86s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1971: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.86s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1971: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220125160520-11219

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220125160520-11219: (2.6955513s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.7s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9

=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220125160520-11219
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220125160520-11219
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220125160520-11219: (5.293218758s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.70s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image save gcr.io/google-containers/addon-resizer:functional-20220125160520-11219 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 image save gcr.io/google-containers/addon-resizer:functional-20220125160520-11219 /Users/jenkins/workspace/addon-resizer-save.tar: (2.038735243s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.04s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.05s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image rm gcr.io/google-containers/addon-resizer:functional-20220125160520-11219
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.05s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 image load /Users/jenkins/workspace/addon-resizer-save.tar: (2.203608294s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.69s)
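Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile exercise a registry-free round trip for moving an image in and out of the cluster. The same flow by hand, using only subcommands that appear verbatim in the runs above (the tar path is an arbitrary choice for the example):

    IMG=gcr.io/google-containers/addon-resizer:functional-20220125160520-11219
    out/minikube-darwin-amd64 -p functional-20220125160520-11219 image save "$IMG" /tmp/addon-resizer.tar
    out/minikube-darwin-amd64 -p functional-20220125160520-11219 image rm "$IMG"
    out/minikube-darwin-amd64 -p functional-20220125160520-11219 image load /tmp/addon-resizer.tar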

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.01s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220125160520-11219
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220125160520-11219

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-20220125160520-11219 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220125160520-11219: (2.778320786s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220125160520-11219
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.83s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1272: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1277: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.83s)

TestFunctional/parallel/ProfileCmd/profile_list (0.73s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1312: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1317: Took "653.973672ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1326: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1331: Took "71.035774ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.73s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.85s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1363: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1368: Took "694.058208ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1376: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1381: Took "159.97515ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.85s)
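The timings above show why --light exists: profile list -o json probes each cluster's status (~694ms here), while the --light variant skips the probe (~160ms). Assuming the JSON groups profiles under valid/invalid arrays with a Name field (field names from memory; verify against your minikube version), names can be pulled out with jq:

    out/minikube-darwin-amd64 profile list -o json --light | jq -r '.valid[].Name'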

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:128: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-20220125160520-11219 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.29s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:148: (dbg) Run:  kubectl --context functional-20220125160520-11219 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:343: "nginx-svc" [4b72b92a-022e-4bdc-9754-9487724cb4b2] Pending

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [4b72b92a-022e-4bdc-9754-9487724cb4b2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:343: "nginx-svc" [4b72b92a-022e-4bdc-9754-9487724cb4b2] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:152: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.020018623s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.29s)
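testdata/testsvc.yaml evidently creates a pod labeled run=nginx-svc and a LoadBalancer Service for the tunnel tests that follow. A minimal sketch of the Service half of such a manifest (a guess at the shape, not the actual fixture; the pod is omitted):

    kubectl --context functional-20220125160520-11219 apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      type: LoadBalancer
      selector:
        run: nginx-svc
      ports:
      - port: 80
    EOF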

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:170: (dbg) Run:  kubectl --context functional-20220125160520-11219 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (3.83s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
E0125 16:09:37.834655   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
functional_test_tunnel_test.go:235: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (3.83s)
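While minikube tunnel runs, the LoadBalancer service gets a reachable ingress IP (127.0.0.1 on the docker driver, per the line above); the interleaved E0125 cert_rotation message refers to the long-deleted addons profile and is noise rather than a failure. The check can be reproduced by hand along these lines (a sketch, reusing the jsonpath query from the IngressIP test):

    IP=$(kubectl --context functional-20220125160520-11219 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -sf "http://$IP" >/dev/null && echo "tunnel is working"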

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:370: (dbg) stopping [out/minikube-darwin-amd64 -p functional-20220125160520-11219 tunnel --alsologtostderr] ...
helpers_test.go:501: unable to terminate pid 13871: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MountCmd/any-port (9.53s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220125160520-11219 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest1207793011:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1643155791501551000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest1207793011/created-by-test
functional_test_mount_test.go:110: wrote "test-1643155791501551000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest1207793011/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1643155791501551000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest1207793011/test-1643155791501551000
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (630.712015ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh -- ls -la /mount-9p
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 26 00:09 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 26 00:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 26 00:09 test-1643155791501551000
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh cat /mount-9p/test-1643155791501551000

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20220125160520-11219 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [c848720a-b1cd-4bed-b001-86e68662a8a5] Pending

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [c848720a-b1cd-4bed-b001-86e68662a8a5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [c848720a-b1cd-4bed-b001-86e68662a8a5] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.01586021s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20220125160520-11219 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220125160520-11219 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest1207793011:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.53s)
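minikube mount serves a host directory into the guest over 9p; the first findmnt probe fails with exit status 1 only because the test polls before the mount has settled, and the immediate retry succeeds. The underlying usage pattern, with the exact paths from this run:

    out/minikube-darwin-amd64 mount -p functional-20220125160520-11219 \
      /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest1207793011:/mount-9p &
    out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "findmnt -T /mount-9p | grep 9p"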

TestFunctional/parallel/MountCmd/specific-port (3.38s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-20220125160520-11219 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest400958284:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (631.295113ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220125160520-11219 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest400958284:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh "sudo umount -f /mount-9p": exit status 1 (576.298838ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-darwin-amd64 -p functional-20220125160520-11219 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-20220125160520-11219 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mounttest400958284:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (3.38s)

TestFunctional/delete_addon-resizer_images (0.24s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220125160520-11219
--- PASS: TestFunctional/delete_addon-resizer_images (0.24s)

TestFunctional/delete_my-image_image (0.11s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220125160520-11219
--- PASS: TestFunctional/delete_my-image_image (0.11s)

TestFunctional/delete_minikube_cached_images (0.1s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220125160520-11219
--- PASS: TestFunctional/delete_minikube_cached_images (0.10s)

TestIngressAddonLegacy/StartLegacyK8sCluster (134.38s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220125161515-11219 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0125 16:16:53.932419   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube-darwin-amd64 start -p ingress-addon-legacy-20220125161515-11219 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : (2m14.381485304s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (134.38s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.39s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220125161515-11219 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube-darwin-amd64 -p ingress-addon-legacy-20220125161515-11219 addons enable ingress --alsologtostderr -v=5: (14.392844861s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.39s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.62s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220125161515-11219 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.62s)

TestJSONOutput/start/Command (124.46s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-20220125161845-11219 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0125 16:19:04.890275   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:19:04.895812   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:19:04.906178   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:19:04.928036   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:19:04.970621   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:19:05.052923   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:19:05.220604   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:19:05.542852   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:19:06.187144   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:19:07.470690   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:19:10.037163   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:19:15.163688   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:19:25.404184   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:19:45.886458   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:20:26.846711   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-20220125161845-11219 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (2m4.464423774s)
--- PASS: TestJSONOutput/start/Command (124.46s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.8s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-20220125161845-11219 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.80s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.79s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-20220125161845-11219 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.79s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (17.04s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-20220125161845-11219 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-20220125161845-11219 --output=json --user=testUser: (17.043609028s)
--- PASS: TestJSONOutput/stop/Command (17.04s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.73s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-20220125162114-11219 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-20220125162114-11219 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (122.981849ms)

-- stdout --
	{"specversion":"1.0","id":"4f257fbb-3bd4-464d-b13a-fffe524894e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220125162114-11219] minikube v1.25.1 on Darwin 11.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c57cd945-12ce-4822-90fe-7f1353a7ce5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13326"}}
	{"specversion":"1.0","id":"7e10ed6e-c326-424a-b56d-f2d618ac9f47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig"}}
	{"specversion":"1.0","id":"8ccde328-8a90-4c5a-b34d-8d6b786e7e49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"579adf94-4c1c-49df-8f2a-d0db6cf83c54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"92a259de-ca16-4acd-a6f9-5342cb1b6d8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube"}}
	{"specversion":"1.0","id":"69302d25-5cff-42be-ac8c-83344db33705","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20220125162114-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-20220125162114-11219
--- PASS: TestErrorJSONOutput (0.73s)
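With --output=json, minikube emits one CloudEvents-style JSON object per line (specversion, id, source, type, data), so tooling can detect failures without scraping text. A sketch for isolating just the error events with jq (the jq pipeline is illustrative, not part of the test):

    out/minikube-darwin-amd64 start -p json-output-error-20220125162114-11219 \
      --memory=2200 --output=json --wait=true --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'

For this run that yields the single DRV_UNSUPPORTED_OS payload shown in the stdout block above.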

TestKicCustomNetwork/create_custom_network (87.95s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220125162115-11219 --network=
E0125 16:21:48.768516   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:21:53.934955   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220125162115-11219 --network=: (1m14.700097493s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220125162115-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220125162115-11219
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220125162115-11219: (13.147471801s)
--- PASS: TestKicCustomNetwork/create_custom_network (87.95s)
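The test starts a cluster with the --network flag (left empty in this run) and then asserts via docker network ls that the expected docker network exists. The same flow with an explicitly named network (the network name here is an arbitrary example):

    out/minikube-darwin-amd64 start -p docker-network-20220125162115-11219 --network=my-custom-net
    docker network ls --format '{{.Name}}' | grep my-custom-net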

TestKicCustomNetwork/use_default_bridge_network (73.93s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-20220125162243-11219 --network=bridge
E0125 16:22:44.555152   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:22:44.560557   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:22:44.570726   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:22:44.597196   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:22:44.647233   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:22:44.734716   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:22:44.894881   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:22:45.215731   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:22:45.856146   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:22:47.136670   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:22:49.704958   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:22:54.834382   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:23:05.084134   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:23:17.026086   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:23:25.564165   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
kic_custom_network_test.go:58: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-20220125162243-11219 --network=bridge: (1m4.65726133s)
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-20220125162243-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-20220125162243-11219
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-20220125162243-11219: (9.16383775s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (73.93s)
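
For reference, the two subtests above exercise the Docker (KIC) driver's --network flag: an empty value makes minikube create its own user-defined network, while --network=bridge reuses Docker's default bridge. A minimal manual reproduction, with an illustrative profile and network name rather than the generated ones used by the test:

    # create a cluster on a user-named Docker network
    minikube start -p netdemo --network=demo-net --driver=docker
    # the network should now show up in Docker's list
    docker network ls --format '{{.Name}}' | grep demo-net
    # clean up
    minikube delete -p netdemo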

TestKicExistingNetwork (85.9s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:102: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-20220125162402-11219 --network=existing-network
E0125 16:24:04.893070   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:24:06.533365   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:24:32.608555   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
kic_custom_network_test.go:94: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-20220125162402-11219 --network=existing-network: (1m7.808051618s)
helpers_test.go:176: Cleaning up "existing-network-20220125162402-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-20220125162402-11219
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-20220125162402-11219: (12.729680758s)
--- PASS: TestKicExistingNetwork (85.90s)
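
For reference, this test covers attaching to a network that already exists instead of creating one; the Go test pre-creates "existing-network" programmatically before invoking minikube. A rough shell equivalent, with an illustrative profile name:

    # pre-create the network ourselves
    docker network create existing-network
    # minikube should join it rather than create its own
    minikube start -p netdemo --network=existing-network --driver=docker
    # clean up
    minikube delete -p netdemo
    docker network rm existing-network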

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMountStart/serial/StartWithMountFirst (46.24s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-20220125162523-11219 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
E0125 16:25:28.457747   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-20220125162523-11219 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (45.232663502s)
--- PASS: TestMountStart/serial/StartWithMountFirst (46.24s)
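
For reference, this serial group drives minikube's host-folder mount without Kubernetes (--no-kubernetes) and then checks it over SSH; the later VerifyMount* steps below rerun the same ls probe. The flags match the run above, only the profile name is illustrative:

    # start a Kubernetes-free node with a host mount on a fixed port
    minikube start -p mountdemo --memory=2048 --mount \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 \
      --mount-port 46464 --no-kubernetes --driver=docker
    # the mounted host directory should be listable inside the node
    minikube -p mountdemo ssh -- ls /minikube-host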

TestMountStart/serial/VerifyMountFirst (0.63s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-20220125162523-11219 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.63s)

TestMountStart/serial/StartWithMountSecond (48.58s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220125162523-11219 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
E0125 16:26:53.944158   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
mount_start_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220125162523-11219 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (47.575330628s)
--- PASS: TestMountStart/serial/StartWithMountSecond (48.58s)

TestMountStart/serial/VerifyMountSecond (0.57s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220125162523-11219 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.57s)

TestMountStart/serial/DeleteFirst (12.74s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:134: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-20220125162523-11219 --alsologtostderr -v=5
pause_test.go:134: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-20220125162523-11219 --alsologtostderr -v=5: (12.735213533s)
--- PASS: TestMountStart/serial/DeleteFirst (12.74s)

TestMountStart/serial/VerifyMountPostDelete (0.58s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220125162523-11219 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.58s)

TestMountStart/serial/Stop (7.15s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-20220125162523-11219
mount_start_test.go:156: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-20220125162523-11219: (7.153940256s)
--- PASS: TestMountStart/serial/Stop (7.15s)

TestMountStart/serial/RestartStopped (29.54s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:167: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-20220125162523-11219
E0125 16:27:44.569777   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
mount_start_test.go:167: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-20220125162523-11219: (28.539537877s)
--- PASS: TestMountStart/serial/RestartStopped (29.54s)

TestMountStart/serial/VerifyMountPostStop (0.61s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-20220125162523-11219 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.61s)

TestMultiNode/serial/FreshStart2Nodes (233.91s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220125162801-11219 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0125 16:28:12.313757   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:29:04.896956   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:31:53.932479   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220125162801-11219 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (3m52.824136151s)
multinode_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --alsologtostderr: (1.090000407s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (233.91s)
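
For reference, a two-node cluster of the shape tested above can be created and inspected by hand (illustrative profile name):

    # one control plane plus one worker, waiting for all components
    minikube start -p mndemo --nodes=2 --memory=2200 --wait=true --driver=docker
    # per-node host/kubelet/apiserver/kubeconfig state
    minikube -p mndemo status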

TestMultiNode/serial/DeployApp2Nodes (6.76s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: (2.014576244s)
multinode_test.go:491: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- rollout status deployment/busybox
multinode_test.go:491: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- rollout status deployment/busybox: (3.291748802s)
multinode_test.go:497: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:517: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- exec busybox-7978565885-jdp72 -- nslookup kubernetes.io
multinode_test.go:517: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- exec busybox-7978565885-jjxrf -- nslookup kubernetes.io
multinode_test.go:527: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- exec busybox-7978565885-jdp72 -- nslookup kubernetes.default
multinode_test.go:527: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- exec busybox-7978565885-jjxrf -- nslookup kubernetes.default
multinode_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- exec busybox-7978565885-jdp72 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:535: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- exec busybox-7978565885-jjxrf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.76s)
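
For reference, the steps above apply a two-replica busybox Deployment, wait for its rollout, then resolve external and in-cluster DNS names from each pod. With the test's manifest applied, the same probes look like this ("minikube kubectl --" forwards to a version-matched kubectl; pod names vary per run):

    minikube kubectl -p mndemo -- rollout status deployment/busybox
    # list the pods the Deployment created
    minikube kubectl -p mndemo -- get pods -o jsonpath='{.items[*].metadata.name}'
    # every pod must resolve both kinds of names
    minikube kubectl -p mndemo -- exec <pod-name> -- nslookup kubernetes.io
    minikube kubectl -p mndemo -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local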

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:545: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:553: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- exec busybox-7978565885-jdp72 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- exec busybox-7978565885-jdp72 -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:553: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- exec busybox-7978565885-jjxrf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:561: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-20220125162801-11219 -- exec busybox-7978565885-jjxrf -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
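
For reference, host.minikube.internal is the name minikube publishes so pods can reach the host; in this run it resolves to 192.168.65.2, the usual Docker Desktop host address. The pipeline above scrapes the resolved IP out of busybox's nslookup output (line 5, third space-separated field) and then pings it from inside each pod:

    # extract the address host.minikube.internal resolves to
    kubectl exec <pod-name> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # and confirm the host answers
    kubectl exec <pod-name> -- sh -c "ping -c 1 192.168.65.2"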

TestMultiNode/serial/AddNode (110.2s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220125162801-11219 -v 3 --alsologtostderr
E0125 16:32:44.561703   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-20220125162801-11219 -v 3 --alsologtostderr: (1m48.723047697s)
multinode_test.go:117: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --alsologtostderr: (1.481160168s)
--- PASS: TestMultiNode/serial/AddNode (110.20s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (21.77s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --output json --alsologtostderr: (1.484464932s)
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 cp testdata/cp-test.txt multinode-20220125162801-11219:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 cp multinode-20220125162801-11219:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mk_cp_test1721565568/cp-test_multinode-20220125162801-11219.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 cp multinode-20220125162801-11219:/home/docker/cp-test.txt multinode-20220125162801-11219-m02:/home/docker/cp-test_multinode-20220125162801-11219_multinode-20220125162801-11219-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219-m02 "sudo cat /home/docker/cp-test_multinode-20220125162801-11219_multinode-20220125162801-11219-m02.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 cp multinode-20220125162801-11219:/home/docker/cp-test.txt multinode-20220125162801-11219-m03:/home/docker/cp-test_multinode-20220125162801-11219_multinode-20220125162801-11219-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219-m03 "sudo cat /home/docker/cp-test_multinode-20220125162801-11219_multinode-20220125162801-11219-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 cp testdata/cp-test.txt multinode-20220125162801-11219-m02:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 cp multinode-20220125162801-11219-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mk_cp_test1721565568/cp-test_multinode-20220125162801-11219-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 cp multinode-20220125162801-11219-m02:/home/docker/cp-test.txt multinode-20220125162801-11219:/home/docker/cp-test_multinode-20220125162801-11219-m02_multinode-20220125162801-11219.txt
E0125 16:34:04.892705   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219 "sudo cat /home/docker/cp-test_multinode-20220125162801-11219-m02_multinode-20220125162801-11219.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 cp multinode-20220125162801-11219-m02:/home/docker/cp-test.txt multinode-20220125162801-11219-m03:/home/docker/cp-test_multinode-20220125162801-11219-m02_multinode-20220125162801-11219-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219-m03 "sudo cat /home/docker/cp-test_multinode-20220125162801-11219-m02_multinode-20220125162801-11219-m03.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 cp testdata/cp-test.txt multinode-20220125162801-11219-m03:/home/docker/cp-test.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 cp multinode-20220125162801-11219-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/mk_cp_test1721565568/cp-test_multinode-20220125162801-11219-m03.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 cp multinode-20220125162801-11219-m03:/home/docker/cp-test.txt multinode-20220125162801-11219:/home/docker/cp-test_multinode-20220125162801-11219-m03_multinode-20220125162801-11219.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219 "sudo cat /home/docker/cp-test_multinode-20220125162801-11219-m03_multinode-20220125162801-11219.txt"
helpers_test.go:555: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 cp multinode-20220125162801-11219-m03:/home/docker/cp-test.txt multinode-20220125162801-11219-m02:/home/docker/cp-test_multinode-20220125162801-11219-m03_multinode-20220125162801-11219-m02.txt
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:533: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 ssh -n multinode-20220125162801-11219-m02 "sudo cat /home/docker/cp-test_multinode-20220125162801-11219-m03_multinode-20220125162801-11219-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (21.77s)
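
For reference, the block above walks "minikube cp" through every direction, each transfer verified with "sudo cat" over SSH. The general shapes, with illustrative profile and node names:

    # local file -> node
    minikube -p mndemo cp ./cp-test.txt mndemo-m02:/home/docker/cp-test.txt
    # node -> local path
    minikube -p mndemo cp mndemo-m02:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    minikube -p mndemo cp mndemo-m02:/home/docker/cp-test.txt mndemo-m03:/home/docker/cp-test.txt
    # verify on the target node
    minikube -p mndemo ssh -n mndemo-m03 "sudo cat /home/docker/cp-test.txt"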

TestMultiNode/serial/StopNode (10.54s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:215: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 node stop m03
multinode_test.go:215: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 node stop m03: (8.122857952s)
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status: exit status 7 (1.167230982s)

-- stdout --
	multinode-20220125162801-11219
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220125162801-11219-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220125162801-11219-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --alsologtostderr
multinode_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --alsologtostderr: exit status 7 (1.253938227s)

-- stdout --
	multinode-20220125162801-11219
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220125162801-11219-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220125162801-11219-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0125 16:34:24.838627   18257 out.go:297] Setting OutFile to fd 1 ...
	I0125 16:34:24.838761   18257 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:34:24.838765   18257 out.go:310] Setting ErrFile to fd 2...
	I0125 16:34:24.838769   18257 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:34:24.838837   18257 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/bin
	I0125 16:34:24.839010   18257 out.go:304] Setting JSON to false
	I0125 16:34:24.839029   18257 mustload.go:65] Loading cluster: multinode-20220125162801-11219
	I0125 16:34:24.839277   18257 config.go:176] Loaded profile config "multinode-20220125162801-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 16:34:24.839289   18257 status.go:253] checking status of multinode-20220125162801-11219 ...
	I0125 16:34:24.839642   18257 cli_runner.go:133] Run: docker container inspect multinode-20220125162801-11219 --format={{.State.Status}}
	I0125 16:34:24.948981   18257 status.go:328] multinode-20220125162801-11219 host status = "Running" (err=<nil>)
	I0125 16:34:24.949013   18257 host.go:66] Checking if "multinode-20220125162801-11219" exists ...
	I0125 16:34:24.949306   18257 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220125162801-11219
	I0125 16:34:25.059282   18257 host.go:66] Checking if "multinode-20220125162801-11219" exists ...
	I0125 16:34:25.059583   18257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0125 16:34:25.059653   18257 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220125162801-11219
	I0125 16:34:25.169597   18257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52856 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/multinode-20220125162801-11219/id_rsa Username:docker}
	I0125 16:34:25.262585   18257 ssh_runner.go:195] Run: systemctl --version
	I0125 16:34:25.266857   18257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0125 16:34:25.275877   18257 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-20220125162801-11219
	I0125 16:34:25.471171   18257 kubeconfig.go:92] found "multinode-20220125162801-11219" server: "https://127.0.0.1:52855"
	I0125 16:34:25.471196   18257 api_server.go:165] Checking apiserver status ...
	I0125 16:34:25.471236   18257 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0125 16:34:25.488250   18257 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1773/cgroup
	I0125 16:34:25.496937   18257 api_server.go:181] apiserver freezer: "7:freezer:/docker/09c4a22758c4dbe505c451bb8545353732f322fbba8a5f1e583296b73df9ead3/kubepods/burstable/podfad0942713ce2538f66cf7753fe2be6d/ae3de10f37cf6b5cc7a6f6380e7e6a08a9a174ab2ceadfdcce4d8f9812d9db69"
	I0125 16:34:25.496994   18257 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/09c4a22758c4dbe505c451bb8545353732f322fbba8a5f1e583296b73df9ead3/kubepods/burstable/podfad0942713ce2538f66cf7753fe2be6d/ae3de10f37cf6b5cc7a6f6380e7e6a08a9a174ab2ceadfdcce4d8f9812d9db69/freezer.state
	I0125 16:34:25.504090   18257 api_server.go:203] freezer state: "THAWED"
	I0125 16:34:25.504108   18257 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:52855/healthz ...
	I0125 16:34:25.509937   18257 api_server.go:266] https://127.0.0.1:52855/healthz returned 200:
	ok
	I0125 16:34:25.509950   18257 status.go:419] multinode-20220125162801-11219 apiserver status = Running (err=<nil>)
	I0125 16:34:25.509960   18257 status.go:255] multinode-20220125162801-11219 status: &{Name:multinode-20220125162801-11219 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0125 16:34:25.509979   18257 status.go:253] checking status of multinode-20220125162801-11219-m02 ...
	I0125 16:34:25.510258   18257 cli_runner.go:133] Run: docker container inspect multinode-20220125162801-11219-m02 --format={{.State.Status}}
	I0125 16:34:25.616781   18257 status.go:328] multinode-20220125162801-11219-m02 host status = "Running" (err=<nil>)
	I0125 16:34:25.616801   18257 host.go:66] Checking if "multinode-20220125162801-11219-m02" exists ...
	I0125 16:34:25.617088   18257 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220125162801-11219-m02
	I0125 16:34:25.724627   18257 host.go:66] Checking if "multinode-20220125162801-11219-m02" exists ...
	I0125 16:34:25.724915   18257 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0125 16:34:25.724985   18257 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220125162801-11219-m02
	I0125 16:34:25.832172   18257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53189 SSHKeyPath:/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/machines/multinode-20220125162801-11219-m02/id_rsa Username:docker}
	I0125 16:34:25.928246   18257 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0125 16:34:25.938133   18257 status.go:255] multinode-20220125162801-11219-m02 status: &{Name:multinode-20220125162801-11219-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0125 16:34:25.938158   18257 status.go:253] checking status of multinode-20220125162801-11219-m03 ...
	I0125 16:34:25.938456   18257 cli_runner.go:133] Run: docker container inspect multinode-20220125162801-11219-m03 --format={{.State.Status}}
	I0125 16:34:26.045915   18257 status.go:328] multinode-20220125162801-11219-m03 host status = "Stopped" (err=<nil>)
	I0125 16:34:26.045941   18257 status.go:341] host is not running, skipping remaining checks
	I0125 16:34:26.045948   18257 status.go:255] multinode-20220125162801-11219-m03 status: &{Name:multinode-20220125162801-11219-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (10.54s)
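
For reference, individual nodes are managed through "minikube node", and once any node is down, "minikube status" reports it and exits non-zero (exit status 7 in this run), which is exactly what the Non-zero exit lines above assert. Illustrative names:

    # stop only the second worker
    minikube -p mndemo node stop m03
    # exits 7 while m03 is stopped
    minikube -p mndemo status
    # bring it back (the StartAfterStop subtest below does this)
    minikube -p mndemo node start m03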

TestMultiNode/serial/StartAfterStop (52.33s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:249: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 node start m03 --alsologtostderr
multinode_test.go:259: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 node start m03 --alsologtostderr: (50.680773102s)
multinode_test.go:266: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status
multinode_test.go:266: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status: (1.501764981s)
multinode_test.go:280: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (52.33s)

TestMultiNode/serial/RestartKeepsNodes (254.04s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220125162801-11219
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-20220125162801-11219
E0125 16:35:27.976639   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-20220125162801-11219: (40.578331352s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220125162801-11219 --wait=true -v=8 --alsologtostderr
E0125 16:36:53.938990   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 16:37:44.559016   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 16:39:04.890516   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 16:39:07.668305   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
multinode_test.go:300: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220125162801-11219 --wait=true -v=8 --alsologtostderr: (3m33.358857278s)
multinode_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220125162801-11219
--- PASS: TestMultiNode/serial/RestartKeepsNodes (254.04s)

TestMultiNode/serial/DeleteNode (15.01s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:399: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 node delete m03
multinode_test.go:399: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 node delete m03: (11.955055245s)
multinode_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --alsologtostderr
multinode_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --alsologtostderr: (1.113815266s)
multinode_test.go:419: (dbg) Run:  docker volume ls
multinode_test.go:429: (dbg) Run:  kubectl get nodes
multinode_test.go:429: (dbg) Done: kubectl get nodes: (1.77360068s)
multinode_test.go:437: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (15.01s)
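
For reference, deleting a node drops it from the cluster and removes its container; the docker volume ls step above presumably asserts that the node's volume is gone as well. The closing go-template prints one Ready-condition status per remaining node, a compact way to check that every node reports True:

    minikube -p mndemo node delete m03
    kubectl get nodes
    docker volume ls
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'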

TestMultiNode/serial/StopMultiNode (24.18s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:319: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 stop
E0125 16:39:57.037784   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
multinode_test.go:319: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 stop: (23.671279475s)
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status: exit status 7 (252.575022ms)

-- stdout --
	multinode-20220125162801-11219
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220125162801-11219-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:332: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --alsologtostderr
multinode_test.go:332: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --alsologtostderr: exit status 7 (251.634852ms)

-- stdout --
	multinode-20220125162801-11219
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220125162801-11219-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0125 16:40:11.387200   19123 out.go:297] Setting OutFile to fd 1 ...
	I0125 16:40:11.387341   19123 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:40:11.387346   19123 out.go:310] Setting ErrFile to fd 2...
	I0125 16:40:11.387349   19123 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0125 16:40:11.387417   19123 root.go:315] Updating PATH: /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/bin
	I0125 16:40:11.387593   19123 out.go:304] Setting JSON to false
	I0125 16:40:11.387608   19123 mustload.go:65] Loading cluster: multinode-20220125162801-11219
	I0125 16:40:11.387880   19123 config.go:176] Loaded profile config "multinode-20220125162801-11219": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.2
	I0125 16:40:11.387894   19123 status.go:253] checking status of multinode-20220125162801-11219 ...
	I0125 16:40:11.388291   19123 cli_runner.go:133] Run: docker container inspect multinode-20220125162801-11219 --format={{.State.Status}}
	I0125 16:40:11.492392   19123 status.go:328] multinode-20220125162801-11219 host status = "Stopped" (err=<nil>)
	I0125 16:40:11.492422   19123 status.go:341] host is not running, skipping remaining checks
	I0125 16:40:11.492430   19123 status.go:255] multinode-20220125162801-11219 status: &{Name:multinode-20220125162801-11219 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0125 16:40:11.492476   19123 status.go:253] checking status of multinode-20220125162801-11219-m02 ...
	I0125 16:40:11.492790   19123 cli_runner.go:133] Run: docker container inspect multinode-20220125162801-11219-m02 --format={{.State.Status}}
	I0125 16:40:11.595202   19123 status.go:328] multinode-20220125162801-11219-m02 host status = "Stopped" (err=<nil>)
	I0125 16:40:11.595228   19123 status.go:341] host is not running, skipping remaining checks
	I0125 16:40:11.595235   19123 status.go:255] multinode-20220125162801-11219-m02 status: &{Name:multinode-20220125162801-11219-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.18s)

TestMultiNode/serial/RestartMultiNode (152.66s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:349: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:359: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220125162801-11219 --wait=true -v=8 --alsologtostderr --driver=docker 
E0125 16:41:53.929982   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
multinode_test.go:359: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220125162801-11219 --wait=true -v=8 --alsologtostderr --driver=docker : (2m29.669107215s)
multinode_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --alsologtostderr
multinode_test.go:365: (dbg) Done: out/minikube-darwin-amd64 -p multinode-20220125162801-11219 status --alsologtostderr: (1.075653863s)
multinode_test.go:379: (dbg) Run:  kubectl get nodes
multinode_test.go:379: (dbg) Done: kubectl get nodes: (1.765295301s)
multinode_test.go:387: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (152.66s)

TestMultiNode/serial/ValidateNameConflict (99.88s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:448: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-20220125162801-11219
multinode_test.go:457: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220125162801-11219-m02 --driver=docker 
E0125 16:42:44.564429   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
multinode_test.go:457: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-20220125162801-11219-m02 --driver=docker : exit status 14 (358.855882ms)

-- stdout --
	* [multinode-20220125162801-11219-m02] minikube v1.25.1 on Darwin 11.1
	  - MINIKUBE_LOCATION=13326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220125162801-11219-m02' is duplicated with machine name 'multinode-20220125162801-11219-m02' in profile 'multinode-20220125162801-11219'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-20220125162801-11219-m03 --driver=docker 
E0125 16:44:04.891070   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
multinode_test.go:465: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-20220125162801-11219-m03 --driver=docker : (1m22.845901858s)
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-20220125162801-11219
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-20220125162801-11219: exit status 80 (612.076428ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220125162801-11219
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220125162801-11219-m03 already exists in multinode-20220125162801-11219-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:477: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-20220125162801-11219-m03
multinode_test.go:477: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-20220125162801-11219-m03: (16.018441183s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (99.88s)
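
For reference, multi-node machines are named <profile>-m02, <profile>-m03, and so on, so a new profile whose name collides with an existing machine is refused up front with MK_USAGE (exit status 14 above), while "node add" likewise refuses a node name that already exists (exit status 80, GUEST_NODE_ADD). A minimal reproduction of the first conflict, with an illustrative profile name:

    # creates machines "demo" and "demo-m02"
    minikube start -p demo --nodes=2 --driver=docker
    # refused: "demo-m02" is already a machine of profile "demo"
    minikube start -p demo-m02 --driver=docker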

TestPreload (217.64s)

=== RUN   TestPreload
preload_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220125164445-11219 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0
E0125 16:46:53.933035   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
preload_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20220125164445-11219 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.17.0: (2m20.326017289s)
preload_test.go:62: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20220125164445-11219 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:62: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-20220125164445-11219 -- docker pull gcr.io/k8s-minikube/busybox: (2.224955099s)
preload_test.go:72: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-20220125164445-11219 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3
E0125 16:47:44.566111   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
preload_test.go:72: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-20220125164445-11219 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.17.3: (1m1.529920492s)
preload_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-20220125164445-11219 -- docker images
helpers_test.go:176: Cleaning up "test-preload-20220125164445-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-20220125164445-11219
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-20220125164445-11219: (12.920101722s)
--- PASS: TestPreload (217.64s)
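
For reference, this test builds a cluster with the preloaded image tarball disabled, seeds an extra image into the node's Docker daemon, then restarts onto a nearby Kubernetes version and checks that the seeded image survived. Roughly, with an illustrative profile name:

    minikube start -p predemo --memory=2200 --preload=false --driver=docker --kubernetes-version=v1.17.0
    # pull an image that no preload provides
    minikube ssh -p predemo -- docker pull gcr.io/k8s-minikube/busybox
    # restart on a newer patch release
    minikube start -p predemo --memory=2200 --driver=docker --kubernetes-version=v1.17.3
    # busybox should still appear here
    minikube ssh -p predemo -- docker images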

TestScheduledStopUnix (152.28s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-20220125164823-11219 --memory=2048 --driver=docker 
E0125 16:49:04.895945   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
scheduled_stop_test.go:129: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-20220125164823-11219 --memory=2048 --driver=docker : (1m13.392856224s)
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220125164823-11219 --schedule 5m
scheduled_stop_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220125164823-11219 -n scheduled-stop-20220125164823-11219
scheduled_stop_test.go:170: signal error was:  <nil>
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220125164823-11219 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220125164823-11219 --cancel-scheduled
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220125164823-11219 -n scheduled-stop-20220125164823-11219
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220125164823-11219
scheduled_stop_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-20220125164823-11219 --schedule 15s
scheduled_stop_test.go:170: signal error was:  os: process already finished
scheduled_stop_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-20220125164823-11219
scheduled_stop_test.go:206: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-20220125164823-11219: exit status 7 (154.048095ms)

-- stdout --
	scheduled-stop-20220125164823-11219
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220125164823-11219 -n scheduled-stop-20220125164823-11219
scheduled_stop_test.go:177: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-20220125164823-11219 -n scheduled-stop-20220125164823-11219: exit status 7 (154.451127ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:177: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20220125164823-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-20220125164823-11219
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-20220125164823-11219: (6.226387119s)
--- PASS: TestScheduledStopUnix (152.28s)
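
For reference, scheduled stops are armed, re-armed, and cancelled with the same "minikube stop" command; "--format={{.TimeToStop}}" exposes the pending schedule, and once a scheduled stop has fired, plain "status" exits 7 as seen above. Illustrative profile name:

    # arm a stop five minutes out
    minikube stop -p schedemo --schedule 5m
    # inspect the countdown
    minikube status -p schedemo --format='{{.TimeToStop}}'
    # cancel it again
    minikube stop -p schedemo --cancel-scheduled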

TestInsufficientStorage (62.83s)

=== RUN   TestInsufficientStorage
status_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-20220125165231-11219 --memory=2048 --output=json --wait=true --driver=docker 
E0125 16:52:44.565139   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
status_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-20220125165231-11219 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (49.932560346s)

-- stdout --
	{"specversion":"1.0","id":"63c73c76-17e3-4afd-8104-f58e60fff98c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220125165231-11219] minikube v1.25.1 on Darwin 11.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"65cfd88e-73fb-47b4-85a3-2810692189d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13326"}}
	{"specversion":"1.0","id":"ee344e08-1f04-403e-8760-d50afdb58cd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig"}}
	{"specversion":"1.0","id":"b0d41d7d-9553-442c-a1e7-1272d812b1db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"24d5f1c9-51b0-4a6f-b00a-f115d32315ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"55271023-feed-44b8-b371-4d1d6345bad5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube"}}
	{"specversion":"1.0","id":"99c0b83a-767b-46e5-8613-bc5d13fc1bd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ea7dc8c4-56f0-4a7e-929c-0d6130adada4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f2476b5-92e9-4ad1-a788-a4a36fc0eadb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220125165231-11219 in cluster insufficient-storage-20220125165231-11219","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fe49396-d56f-4722-a769-3647e5cde7ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f3d56d23-9e4f-4a8c-a1e9-935bc596d7f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fbf4fa07-4830-4904-a177-46edec5f0db8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220125165231-11219 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220125165231-11219 --output=json --layout=cluster: exit status 7 (582.537704ms)

-- stdout --
	{"Name":"insufficient-storage-20220125165231-11219","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220125165231-11219","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0125 16:53:21.973658   21296 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220125165231-11219" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig

** /stderr **
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-20220125165231-11219 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-20220125165231-11219 --output=json --layout=cluster: exit status 7 (572.183748ms)

-- stdout --
	{"Name":"insufficient-storage-20220125165231-11219","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220125165231-11219","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0125 16:53:22.546888   21313 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220125165231-11219" does not appear in /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	E0125 16:53:22.557826   21313 status.go:557] unable to read event log: stat: stat /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/insufficient-storage-20220125165231-11219/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-20220125165231-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-20220125165231-11219
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-20220125165231-11219: (11.74007789s)
--- PASS: TestInsufficientStorage (62.83s)
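
Each line of the --output=json stream above is one CloudEvents-style JSON object, and every value in its data payload is a string (the RSRC_DOCKER_STORAGE error event carries exitcode, message, advice, and issues). A minimal decoder sketch under that assumption; only the fields visible in this log are modeled, and the program name is hypothetical:

    // eventscan.go: hedged sketch. Pipe in the JSON stream, e.g.
    //   out/minikube-darwin-amd64 start --output=json ... | eventscan
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type event struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"` // all string-valued in this log
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // event lines can be long
        for sc.Scan() {
            var e event
            if json.Unmarshal(sc.Bytes(), &e) != nil {
                continue // tolerate interleaved non-JSON log lines
            }
            if e.Type == "io.k8s.sigs.minikube.error" {
                // e.g. the storage error above: exitcode 26, out-of-disk advice
                fmt.Printf("exit %s: %s\n", e.Data["exitcode"], e.Data["message"])
            }
        }
    }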

TestKubernetesUpgrade (198.22s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220125170105-11219 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0125 17:01:53.981674   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
version_upgrade_test.go:229: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220125170105-11219 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : (1m10.498771393s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220125170105-11219

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-20220125170105-11219: (18.229853816s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-20220125170105-11219 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-20220125170105-11219 status --format={{.Host}}: exit status 7 (149.383379ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220125170105-11219 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220125170105-11219 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker : (59.580786755s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220125170105-11219 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220125170105-11219 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220125170105-11219 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (396.518352ms)

-- stdout --
	* [kubernetes-upgrade-20220125170105-11219] minikube v1.25.1 on Darwin 11.1
	  - MINIKUBE_LOCATION=13326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.3-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220125170105-11219
	    minikube start -p kubernetes-upgrade-20220125170105-11219 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220125170105-112192 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.3-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220125170105-11219 --kubernetes-version=v1.23.3-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220125170105-11219 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker 
E0125 17:04:04.944321   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-20220125170105-11219 --memory=2200 --kubernetes-version=v1.23.3-rc.0 --alsologtostderr -v=1 --driver=docker : (31.88175516s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20220125170105-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220125170105-11219

=== CONT  TestKubernetesUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-20220125170105-11219: (17.312307228s)
--- PASS: TestKubernetesUpgrade (198.22s)
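
Condensed, the flow this test drives is: start on v1.16.0, stop, restart on v1.23.3-rc.0; an in-place downgrade is then refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) instead of mutating the cluster. A hedged sketch of the same sequence via os/exec, with a hypothetical profile name:

    // upgradeflow.go: sketch of the sequence above, not the test itself.
    package main

    import (
        "log"
        "os/exec"
    )

    func run(args ...string) {
        out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
        if err != nil {
            log.Printf("%v: %v\n%s", args, err, out)
        }
    }

    func main() {
        const p = "kubernetes-upgrade-example" // hypothetical profile
        run("start", "-p", p, "--memory=2200", "--kubernetes-version=v1.16.0", "--driver=docker")
        run("stop", "-p", p)
        run("start", "-p", p, "--memory=2200", "--kubernetes-version=v1.23.3-rc.0", "--driver=docker")
        // Downgrading the same profile exits 106; the suggested recovery is
        // delete-and-recreate, as in the stderr block above.
        run("delete", "-p", p)
    }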

TestMissingContainerUpgrade (195.67s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2000485133.exe start -p missing-upgrade-20220125170104-11219 --memory=2200 --driver=docker 

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.1.2000485133.exe start -p missing-upgrade-20220125170104-11219 --memory=2200 --driver=docker : (1m20.342712409s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220125170104-11219

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220125170104-11219: (14.599436114s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220125170104-11219
version_upgrade_test.go:336: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-20220125170104-11219 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0125 17:02:44.615076   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-darwin-amd64 start -p missing-upgrade-20220125170104-11219 --memory=2200 --alsologtostderr -v=1 --driver=docker : (1m26.596391718s)
helpers_test.go:176: Cleaning up "missing-upgrade-20220125170104-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-20220125170104-11219

=== CONT  TestMissingContainerUpgrade
helpers_test.go:179: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-20220125170104-11219: (13.52043529s)
--- PASS: TestMissingContainerUpgrade (195.67s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.43s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.25.1 on darwin
- MINIKUBE_LOCATION=13326
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.11.0-to-current416005507
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.11.0-to-current416005507/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.11.0-to-current416005507/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.11.0-to-current416005507/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.43s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.53s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.25.1 on darwin
- MINIKUBE_LOCATION=13326
- KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.2.0-to-current3558817794
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.2.0-to-current3558817794/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.2.0-to-current3558817794/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/upgrade-v1.2.0-to-current3558817794/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.53s)

TestStoppedBinaryUpgrade/Setup (0.43s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

TestStoppedBinaryUpgrade/Upgrade (143.08s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.4178707603.exe start -p stopped-upgrade-20220125170419-11219 --memory=2200 --vm-driver=docker 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.4178707603.exe start -p stopped-upgrade-20220125170419-11219 --memory=2200 --vm-driver=docker : (1m20.178748621s)
version_upgrade_test.go:199: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.4178707603.exe -p stopped-upgrade-20220125170419-11219 stop
version_upgrade_test.go:199: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.4178707603.exe -p stopped-upgrade-20220125170419-11219 stop: (3.517866442s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-20220125170419-11219 --memory=2200 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-20220125170419-11219 --memory=2200 --alsologtostderr -v=1 --driver=docker : (59.383495524s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (143.08s)

TestPause/serial/Start (108.4s)

=== RUN   TestPause/serial/Start
pause_test.go:82: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220125170424-11219 --memory=2048 --install-addons=false --wait=all --driver=docker 

=== CONT  TestPause/serial/Start
pause_test.go:82: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220125170424-11219 --memory=2048 --install-addons=false --wait=all --driver=docker : (1m48.402486095s)
--- PASS: TestPause/serial/Start (108.40s)

TestPause/serial/SecondStartNoReconfiguration (7.19s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-20220125170424-11219 --alsologtostderr -v=1 --driver=docker 
pause_test.go:94: (dbg) Done: out/minikube-darwin-amd64 start -p pause-20220125170424-11219 --alsologtostderr -v=1 --driver=docker : (7.182246319s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.19s)

TestPause/serial/Pause (0.82s)

=== RUN   TestPause/serial/Pause
pause_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220125170424-11219 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.82s)

TestPause/serial/VerifyStatus (0.62s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:77: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-20220125170424-11219 --output=json --layout=cluster
status_test.go:77: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-20220125170424-11219 --output=json --layout=cluster: exit status 2 (615.859419ms)

-- stdout --
	{"Name":"pause-20220125170424-11219","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.25.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20220125170424-11219","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.62s)
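
The --layout=cluster payloads in this report use HTTP-flavored codes per component: 200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage. A decoding sketch that models only the fields present in these log lines:

    // clusterstate.go: hedged sketch for "minikube status --output=json
    // --layout=cluster"; the field set is inferred from this report.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type component struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
    }

    type node struct {
        Name       string               `json:"Name"`
        StatusCode int                  `json:"StatusCode"`
        StatusName string               `json:"StatusName"`
        Components map[string]component `json:"Components"`
    }

    type clusterState struct {
        Name       string               `json:"Name"`
        StatusCode int                  `json:"StatusCode"` // 418 = Paused above
        StatusName string               `json:"StatusName"`
        Components map[string]component `json:"Components"`
        Nodes      []node               `json:"Nodes"`
    }

    func main() {
        raw := []byte(`{"Name":"pause-20220125170424-11219","StatusCode":418,"StatusName":"Paused"}`)
        var st clusterState
        if err := json.Unmarshal(raw, &st); err != nil {
            panic(err)
        }
        fmt.Println(st.StatusName) // Paused
    }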

TestPause/serial/Unpause (0.8s)

=== RUN   TestPause/serial/Unpause
pause_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-20220125170424-11219 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

TestPause/serial/PauseAgain (0.89s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-20220125170424-11219 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

TestPause/serial/DeletePaused (10.67s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:134: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-20220125170424-11219 --alsologtostderr -v=5
pause_test.go:134: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-20220125170424-11219 --alsologtostderr -v=5: (10.671018603s)
--- PASS: TestPause/serial/DeletePaused (10.67s)

TestPause/serial/VerifyDeletedResources (1.02s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:144: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:170: (dbg) Run:  docker ps -a
pause_test.go:175: (dbg) Run:  docker volume inspect pause-20220125170424-11219
pause_test.go:175: (dbg) Non-zero exit: docker volume inspect pause-20220125170424-11219: exit status 1 (178.993897ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such volume: pause-20220125170424-11219

** /stderr **
pause_test.go:180: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.02s)
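
The cleanup verification above leans on the docker CLI's exit codes: after "minikube delete", "docker volume inspect <profile>" exits 1 and prints "Error: No such volume". A minimal sketch of that check, assuming only a docker CLI on PATH:

    // volumegone.go: hedged sketch of the post-delete check above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func volumeGone(name string) bool {
        err := exec.Command("docker", "volume", "inspect", name).Run()
        exitErr, ok := err.(*exec.ExitError)
        return ok && exitErr.ExitCode() == 1 // "No such volume"
    }

    func main() {
        fmt.Println(volumeGone("pause-20220125170424-11219"))
    }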

TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:84: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220125170635-11219 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:84: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-20220125170635-11219 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (336.966386ms)

-- stdout --
	* [NoKubernetes-20220125170635-11219] minikube v1.25.1 on Darwin 11.1
	  - MINIKUBE_LOCATION=13326
	  - KUBECONFIG=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)

TestNoKubernetes/serial/StartWithK8s (55.88s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220125170635-11219 --driver=docker 

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220125170635-11219 --driver=docker : (55.244795954s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220125170635-11219 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (55.88s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.74s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-20220125170419-11219
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-20220125170419-11219: (2.74062238s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.74s)

TestNetworkPlugins/group/auto/Start (104.81s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-20220125165334-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p auto-20220125165334-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (1m44.806906535s)
--- PASS: TestNetworkPlugins/group/auto/Start (104.81s)

TestNoKubernetes/serial/StartWithStopK8s (28.14s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:113: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220125170635-11219 --no-kubernetes --driver=docker 
E0125 17:07:44.616523   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
no_kubernetes_test.go:113: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220125170635-11219 --no-kubernetes --driver=docker : (14.408663431s)
no_kubernetes_test.go:201: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-20220125170635-11219 status -o json
no_kubernetes_test.go:201: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-20220125170635-11219 status -o json: exit status 2 (604.256291ms)

-- stdout --
	{"Name":"NoKubernetes-20220125170635-11219","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-20220125170635-11219
no_kubernetes_test.go:125: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-20220125170635-11219: (13.122822917s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.14s)

TestNoKubernetes/serial/Start (39.39s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220125170635-11219 --no-kubernetes --driver=docker 
no_kubernetes_test.go:137: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220125170635-11219 --no-kubernetes --driver=docker : (39.389151518s)
--- PASS: TestNoKubernetes/serial/Start (39.39s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.81s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220125170635-11219 "sudo systemctl is-active --quiet service kubelet"

=== CONT  TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220125170635-11219 "sudo systemctl is-active --quiet service kubelet": exit status 1 (808.08991ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.81s)
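
The kubelet probe above runs systemctl inside the node over "minikube ssh"; is-active exits non-zero for an inactive unit (status 3 in this log), so a failing ssh run is the expected outcome on a --no-kubernetes node. A sketch of the same probe:

    // kubeletprobe.go: hedged sketch mirroring the ssh check above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func kubeletActive(profile string) bool {
        cmd := exec.Command("out/minikube-darwin-amd64", "ssh", "-p", profile,
            "sudo systemctl is-active --quiet service kubelet")
        return cmd.Run() == nil // exit 0 only while the unit is active
    }

    func main() {
        fmt.Println(kubeletActive("NoKubernetes-20220125170635-11219"))
    }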

TestNetworkPlugins/group/auto/KubeletFlags (0.72s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-20220125165334-11219 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.72s)

TestNoKubernetes/serial/ProfileList (2.87s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Run:  out/minikube-darwin-amd64 profile list

=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:170: (dbg) Done: out/minikube-darwin-amd64 profile list: (1.483061061s)
no_kubernetes_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json

=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:180: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (1.382768053s)
--- PASS: TestNoKubernetes/serial/ProfileList (2.87s)

TestNetworkPlugins/group/auto/NetCatPod (13s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context auto-20220125165334-11219 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:132: (dbg) Done: kubectl --context auto-20220125165334-11219 replace --force -f testdata/netcat-deployment.yaml: (1.923878537s)
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-9nqn2" [11e768bd-113a-4e0d-8afd-10aee1533142] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
helpers_test.go:343: "netcat-668db85669-9nqn2" [11e768bd-113a-4e0d-8afd-10aee1533142] Running

=== CONT  TestNetworkPlugins/group/auto/NetCatPod
net_test.go:146: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.009432134s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.00s)
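
The NetCatPod steps deploy testdata/netcat-deployment.yaml and then wait for app=netcat pods to reach Running. A polling sketch with kubectl; the jsonpath query and five-second interval are illustrative choices, not the suite's helpers:

    // waitnetcat.go: hedged sketch of waiting for pods matching a label.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func podsRunning(ctx, label string) bool {
        out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
            "-l", label, "-o", "jsonpath={.items[*].status.phase}").Output()
        phases := strings.Fields(string(out))
        if err != nil || len(phases) == 0 {
            return false
        }
        for _, p := range phases {
            if p != "Running" {
                return false
            }
        }
        return true
    }

    func main() {
        deadline := time.Now().Add(15 * time.Minute) // the suite waits 15m0s
        for time.Now().Before(deadline) {
            if podsRunning("auto-20220125165334-11219", "app=netcat") {
                fmt.Println("app=netcat healthy")
                return
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("timed out")
    }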

TestNoKubernetes/serial/Stop (4.72s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-20220125170635-11219

=== CONT  TestNoKubernetes/serial/Stop
no_kubernetes_test.go:159: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-20220125170635-11219: (4.720973535s)
--- PASS: TestNoKubernetes/serial/Stop (4.72s)

TestNoKubernetes/serial/StartNoArgs (20.89s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-20220125170635-11219 --driver=docker 
E0125 17:08:48.039930   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory

=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:192: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-20220125170635-11219 --driver=docker : (20.885496724s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (20.89s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:163: (dbg) Run:  kubectl --context auto-20220125165334-11219 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:182: (dbg) Run:  kubectl --context auto-20220125165334-11219 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (5.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:232: (dbg) Run:  kubectl --context auto-20220125165334-11219 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context auto-20220125165334-11219 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.137307655s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.14s)

TestNetworkPlugins/group/false/Start (104.95s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p false-20220125165335-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
E0125 17:09:04.945911   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p false-20220125165335-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (1m44.950172233s)
--- PASS: TestNetworkPlugins/group/false/Start (104.95s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.47s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-20220125170635-11219 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:148: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-20220125170635-11219 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.470771146s)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.47s)

TestNetworkPlugins/group/cilium/Start (110.39s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-20220125165335-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-20220125165335-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m50.39479457s)
--- PASS: TestNetworkPlugins/group/cilium/Start (110.39s)

TestNetworkPlugins/group/false/KubeletFlags (0.72s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-20220125165335-11219 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.72s)

TestNetworkPlugins/group/false/NetCatPod (11.98s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context false-20220125165335-11219 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context false-20220125165335-11219 replace --force -f testdata/netcat-deployment.yaml: (1.937076253s)
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-wftz2" [c9a3db18-85bf-4a4a-8dd2-294b75695e03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-wftz2" [c9a3db18-85bf-4a4a-8dd2-294b75695e03] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.016481691s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.98s)

TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:163: (dbg) Run:  kubectl --context false-20220125165335-11219 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:182: (dbg) Run:  kubectl --context false-20220125165335-11219 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

TestNetworkPlugins/group/false/HairPin (5.14s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:232: (dbg) Run:  kubectl --context false-20220125165335-11219 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:232: (dbg) Non-zero exit: kubectl --context false-20220125165335-11219 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.136573833s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.14s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-tllcr" [b1ed5784-7f0b-4c19-85e8-4e9fdbee2a40] Running

=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:107: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.014025725s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.61s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-20220125165335-11219 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.61s)

TestNetworkPlugins/group/cilium/NetCatPod (14.45s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context cilium-20220125165335-11219 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context cilium-20220125165335-11219 replace --force -f testdata/netcat-deployment.yaml: (2.408976395s)
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-wctx7" [5b27fef6-9945-4d34-94d0-fdc0b17a54f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-wctx7" [5b27fef6-9945-4d34-94d0-fdc0b17a54f2] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 12.008266643s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (14.45s)

TestNetworkPlugins/group/cilium/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:163: (dbg) Run:  kubectl --context cilium-20220125165335-11219 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.15s)

TestNetworkPlugins/group/cilium/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:182: (dbg) Run:  kubectl --context cilium-20220125165335-11219 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.13s)

TestNetworkPlugins/group/cilium/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:232: (dbg) Run:  kubectl --context cilium-20220125165335-11219 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.13s)

TestNetworkPlugins/group/custom-weave/Start (100.08s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-weave-20220125165335-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker 
E0125 17:11:54.009395   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 17:12:27.765625   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 17:12:44.640011   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 17:13:17.124486   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p custom-weave-20220125165335-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker : (1m40.083738229s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (100.08s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.67s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-weave-20220125165335-11219 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.67s)

TestNetworkPlugins/group/custom-weave/NetCatPod (13.96s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context custom-weave-20220125165335-11219 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context custom-weave-20220125165335-11219 replace --force -f testdata/netcat-deployment.yaml: (1.927290367s)
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-rrvsb" [31645dd2-0829-4d36-a876-eceee320b7f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-rrvsb" [31645dd2-0829-4d36-a876-eceee320b7f6] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 12.011212867s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (13.96s)

TestNetworkPlugins/group/enable-default-cni/Start (57.05s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-20220125165334-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E0125 17:13:44.598958   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
E0125 17:13:47.167003   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
E0125 17:13:52.290578   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
E0125 17:14:02.540601   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
E0125 17:14:04.971282   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 17:14:23.024213   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-20220125165334-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (57.054487339s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (57.05s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.64s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-20220125165334-11219 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.64s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.06s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context enable-default-cni-20220125165334-11219 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context enable-default-cni-20220125165334-11219 replace --force -f testdata/netcat-deployment.yaml: (2.019934728s)
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-sp2cp" [c0486d28-1948-4730-a4fa-e382957e374d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-sp2cp" [c0486d28-1948-4730-a4fa-e382957e374d] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.01024274s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.06s)

TestNetworkPlugins/group/bridge/Start (74.69s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-20220125165334-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
E0125 17:21:15.464441   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:21:19.426129   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:21:43.164546   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:21:54.014688   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-20220125165334-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (1m14.685436928s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.69s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.61s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-20220125165334-11219 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.61s)

TestNetworkPlugins/group/bridge/NetCatPod (15.1s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context bridge-20220125165334-11219 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context bridge-20220125165334-11219 replace --force -f testdata/netcat-deployment.yaml: (2.027550548s)
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-snwv9" [b0808622-0d87-441a-9f3d-b84e0024bf35] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-snwv9" [b0808622-0d87-441a-9f3d-b84e0024bf35] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.008248382s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.10s)

TestNetworkPlugins/group/kubenet/Start (344.95s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-20220125165334-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
E0125 17:26:15.470275   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:99: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-20220125165334-11219 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (5m44.946590657s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (344.95s)

TestStartStop/group/old-k8s-version/serial/FirstStart (165.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220125172750-11219 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0125 17:28:24.960766   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:28:42.027320   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
E0125 17:29:04.981635   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 17:29:07.782207   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 17:29:43.667693   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:29:57.140839   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 17:30:05.136716   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
E0125 17:30:11.418437   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-20220125172750-11219 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: (2m45.151667752s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (165.15s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220125172750-11219 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context old-k8s-version-20220125172750-11219 create -f testdata/busybox.yaml: (2.084223886s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [8acf65d7-25a4-4f3a-a5a4-8b045cb78a5d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [8acf65d7-25a4-4f3a-a5a4-8b045cb78a5d] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.013354618s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context old-k8s-version-20220125172750-11219 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.23s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-20220125172750-11219 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context old-k8s-version-20220125172750-11219 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/old-k8s-version/serial/Stop (18.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-20220125172750-11219 --alsologtostderr -v=3
E0125 17:30:51.728517   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-20220125172750-11219 --alsologtostderr -v=3: (18.967548394s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (18.97s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220125172750-11219 -n old-k8s-version-20220125172750-11219
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220125172750-11219 -n old-k8s-version-20220125172750-11219: exit status 7 (146.077506ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-20220125172750-11219 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.37s)
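The "(may be ok)" note above is deliberate: minikube status encodes cluster state in its exit code, and exit status 7 right after a stop just means the host is Stopped. A minimal sketch, assuming the binary path and profile name shown in the log, of running the same check while tolerating that code (this is not minikube's own helpers_test.go logic):

```go
// Sketch: run "minikube status" for a profile and treat exit code 7
// (host Stopped) as an expected answer rather than a failure.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func hostStatus(profile string) (string, error) {
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // a non-zero exit still leaves stdout in out
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		err = nil // exit status 7: host stopped, "may be ok" per the test
	}
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := hostStatus("old-k8s-version-20220125172750-11219")
	if err != nil {
		panic(err)
	}
	fmt.Println("host state:", state) // expect "Stopped" right after a stop
}
```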

TestStartStop/group/old-k8s-version/serial/SecondStart (150.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-20220125172750-11219 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0125 17:31:15.464790   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:31:54.020916   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-20220125172750-11219 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: (2m29.899518589s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-20220125172750-11219 -n old-k8s-version-20220125172750-11219
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (150.54s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.66s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:120: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-20220125165334-11219 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.66s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.82s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:132: (dbg) Run:  kubectl --context kubenet-20220125165334-11219 replace --force -f testdata/netcat-deployment.yaml
net_test.go:132: (dbg) Done: kubectl --context kubenet-20220125165334-11219 replace --force -f testdata/netcat-deployment.yaml: (2.756706676s)
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-668db85669-zqrrn" [5434ad0a-44d5-4036-9d12-1dfbfd770073] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-668db85669-zqrrn" [5434ad0a-44d5-4036-9d12-1dfbfd770073] Running
net_test.go:146: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.015792786s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.82s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-bqcpl" [773b6804-9b1a-4ad9-ac00-2e2230b71197] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012668235s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-766959b846-bqcpl" [773b6804-9b1a-4ad9-ac00-2e2230b71197] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007580575s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context old-k8s-version-20220125172750-11219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Done: kubectl --context old-k8s-version-20220125172750-11219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (2.157391667s)
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7.17s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-20220125172750-11219 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.63s)
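VerifyKubernetesImages fetches the runtime's image list as JSON over minikube ssh and flags anything that is not a stock Kubernetes image, which is how the busybox line above gets printed. A rough sketch of that scan; the JSON field names are assumptions about crictl's output shape, the allow-list is simplified to one registry prefix, and input arrives on stdin rather than over ssh:

```go
// Sketch: scan `sudo crictl images -o json` output (piped via stdin) for
// images outside the expected registry, as VerifyKubernetesImages does.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// crictlImages mirrors the assumed shape of crictl's JSON output; only the
// repo tags matter for this check.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	var imgs crictlImages
	if err := json.NewDecoder(os.Stdin).Decode(&imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			// A real allow-list would cover every registry minikube ships
			// from; a single prefix keeps the sketch short.
			if !strings.HasPrefix(tag, "k8s.gcr.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}
```

Piped the output of a `ssh ... "sudo crictl images -o json"` invocation like the one above, this would produce the same kind of "Found non-minikube image" report.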

TestStartStop/group/old-k8s-version/serial/Pause (4.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-20220125172750-11219 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220125172750-11219 -n old-k8s-version-20220125172750-11219
E0125 17:33:51.897198   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220125172750-11219 -n old-k8s-version-20220125172750-11219: exit status 2 (623.029295ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220125172750-11219 -n old-k8s-version-20220125172750-11219
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220125172750-11219 -n old-k8s-version-20220125172750-11219: exit status 2 (623.305214ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-20220125172750-11219 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-20220125172750-11219 -n old-k8s-version-20220125172750-11219
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220125172750-11219 -n old-k8s-version-20220125172750-11219
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.31s)

TestStartStop/group/no-preload/serial/FirstStart (114.73s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220125173411-11219 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.3-rc.0
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220125173411-11219 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.3-rc.0: (1m54.72954935s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (114.73s)

TestStartStop/group/no-preload/serial/DeployApp (10.05s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220125173411-11219 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context no-preload-20220125173411-11219 create -f testdata/busybox.yaml: (1.903225175s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [01ce8a43-4b78-45aa-a99b-3ee63e1af745] Pending
helpers_test.go:343: "busybox" [01ce8a43-4b78-45aa-a99b-3ee63e1af745] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [01ce8a43-4b78-45aa-a99b-3ee63e1af745] Running
E0125 17:36:15.468531   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.013023999s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context no-preload-20220125173411-11219 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.05s)
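Each DeployApp step ends with a small smoke check: exec into the just-scheduled busybox pod and read its open-file limit, which proves the pod accepts exec sessions rather than merely reporting Running. A minimal sketch of that check, assuming kubectl is on PATH and reusing the context name from the log:

```go
// Sketch: reproduce the `kubectl exec busybox -- /bin/sh -c "ulimit -n"`
// smoke check from the DeployApp step above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl",
		"--context", "no-preload-20220125173411-11219", // context name from the log
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("exec failed: %v\n%s", err, out))
	}
	fmt.Println("open-file limit inside pod:", strings.TrimSpace(string(out)))
}
```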

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-20220125173411-11219 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context no-preload-20220125173411-11219 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/no-preload/serial/Stop (19.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-20220125173411-11219 --alsologtostderr -v=3
E0125 17:36:19.047675   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-20220125173411-11219 --alsologtostderr -v=3: (19.085244389s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (19.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220125173411-11219 -n no-preload-20220125173411-11219
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220125173411-11219 -n no-preload-20220125173411-11219: exit status 7 (151.019513ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-20220125173411-11219 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/no-preload/serial/SecondStart (75.93s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-20220125173411-11219 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.3-rc.0
E0125 17:36:54.023322   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 17:37:00.010638   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:37:29.942040   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:37:44.661200   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-20220125173411-11219 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.23.3-rc.0: (1m15.04680178s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-20220125173411-11219 -n no-preload-20220125173411-11219
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (75.93s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-2wstt" [7958827a-0817-4076-a6a1-1b113f9354fa] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-2wstt" [7958827a-0817-4076-a6a1-1b113f9354fa] Running
E0125 17:37:57.665460   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.022164821s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.03s)

TestStartStop/group/embed-certs/serial/FirstStart (327.15s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220125173758-11219 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.2
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220125173758-11219 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.2: (5m27.146163816s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (327.15s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.95s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-2wstt" [7958827a-0817-4076-a6a1-1b113f9354fa] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013130327s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context no-preload-20220125173411-11219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Done: kubectl --context no-preload-20220125173411-11219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.937361683s)
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.95s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.68s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-20220125173411-11219 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.68s)

TestStartStop/group/no-preload/serial/Pause (5.42s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-20220125173411-11219 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-darwin-amd64 pause -p no-preload-20220125173411-11219 --alsologtostderr -v=1: (1.04352698s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220125173411-11219 -n no-preload-20220125173411-11219
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220125173411-11219 -n no-preload-20220125173411-11219: exit status 2 (737.987252ms)
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220125173411-11219 -n no-preload-20220125173411-11219
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220125173411-11219 -n no-preload-20220125173411-11219: exit status 2 (744.873178ms)
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-20220125173411-11219 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Done: out/minikube-darwin-amd64 unpause -p no-preload-20220125173411-11219 --alsologtostderr -v=1: (1.24407247s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-20220125173411-11219 -n no-preload-20220125173411-11219
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-20220125173411-11219 -n no-preload-20220125173411-11219
--- PASS: TestStartStop/group/no-preload/serial/Pause (5.42s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (316.45s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220125173828-11219 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.2
E0125 17:38:42.037571   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
E0125 17:39:04.994935   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 17:39:43.678127   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:40:38.045918   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:40:51.747715   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:41:05.800326   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:41:06.800514   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:41:07.760262   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:07.767790   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:07.777881   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:07.802635   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:07.850766   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:07.935762   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:08.100744   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:08.425786   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:09.075917   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:10.359624   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:12.926505   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:15.486501   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:41:18.052264   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:28.302921   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:48.784379   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:41:54.047767   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 17:42:01.609395   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:01.615228   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:01.626130   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:01.651265   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:01.691741   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:01.774351   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:01.935449   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:02.255715   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:02.896745   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:04.177814   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:06.737990   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:08.101253   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
E0125 17:42:11.858676   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:22.100360   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:29.745885   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:42:29.966793   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:42.581683   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:42:44.674405   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 17:43:23.544784   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:43:24.989505   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220125173828-11219 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.2: (5m16.446125826s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (316.45s)

TestStartStop/group/embed-certs/serial/DeployApp (10.05s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220125173758-11219 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context embed-certs-20220125173758-11219 create -f testdata/busybox.yaml: (1.886240464s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [bac787b6-ebc9-4a44-a9fd-e1cafdc36e00] Pending
helpers_test.go:343: "busybox" [bac787b6-ebc9-4a44-a9fd-e1cafdc36e00] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [bac787b6-ebc9-4a44-a9fd-e1cafdc36e00] Running
start_stop_delete_test.go:181: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.016031529s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context embed-certs-20220125173758-11219 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.05s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-20220125173758-11219 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context embed-certs-20220125173758-11219 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/embed-certs/serial/Stop (19.45s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-20220125173758-11219 --alsologtostderr -v=3
E0125 17:43:42.051923   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-20220125173758-11219 --alsologtostderr -v=3: (19.447689299s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (19.45s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.11s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220125173828-11219 create -f testdata/busybox.yaml
start_stop_delete_test.go:181: (dbg) Done: kubectl --context default-k8s-different-port-20220125173828-11219 create -f testdata/busybox.yaml: (1.945179002s)
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [bacddaa7-5e9b-4e18-b0a6-4f2149e53197] Pending
helpers_test.go:343: "busybox" [bacddaa7-5e9b-4e18-b0a6-4f2149e53197] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [bacddaa7-5e9b-4e18-b0a6-4f2149e53197] Running
E0125 17:43:51.666642   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
start_stop_delete_test.go:181: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 8.014588218s
start_stop_delete_test.go:181: (dbg) Run:  kubectl --context default-k8s-different-port-20220125173828-11219 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (10.11s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.86s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-different-port-20220125173828-11219 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:200: (dbg) Run:  kubectl --context default-k8s-different-port-20220125173828-11219 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.56s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220125173758-11219 -n embed-certs-20220125173758-11219
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220125173758-11219 -n embed-certs-20220125173758-11219: exit status 7 (176.382554ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-20220125173758-11219 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.56s)

TestStartStop/group/default-k8s-different-port/serial/Stop (13.24s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220125173828-11219 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-different-port-20220125173828-11219 --alsologtostderr -v=3: (13.237426085s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (13.24s)

TestStartStop/group/embed-certs/serial/SecondStart (302.39s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-20220125173758-11219 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.2
E0125 17:44:05.006374   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-20220125173758-11219 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.23.2: (5m1.730532681s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-20220125173758-11219 -n embed-certs-20220125173758-11219
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (302.39s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.38s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220125173828-11219 -n default-k8s-different-port-20220125173828-11219
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220125173828-11219 -n default-k8s-different-port-20220125173828-11219: exit status 7 (150.566799ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-different-port-20220125173828-11219 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (308.52s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-different-port-20220125173828-11219 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.2
E0125 17:44:43.692124   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/enable-default-cni-20220125165334-11219/client.crt: no such file or directory
E0125 17:44:45.465697   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:45:38.069185   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:45:47.807658   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 17:45:51.749284   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory
E0125 17:46:07.759219   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:46:15.492568   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
E0125 17:46:35.508179   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:46:37.166430   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 17:46:45.163970   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
E0125 17:46:54.044758   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 17:47:01.606507   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:47:29.307258   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
E0125 17:47:29.966848   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:47:44.675388   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/ingress-addon-legacy-20220125161515-11219/client.crt: no such file or directory
E0125 17:48:24.987052   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:48:42.060178   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/auto-20220125165334-11219/client.crt: no such file or directory
E0125 17:48:53.049633   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
E0125 17:48:54.889066   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/false-20220125165335-11219/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-different-port-20220125173828-11219 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.23.2: (5m7.82577793s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-different-port-20220125173828-11219 -n default-k8s-different-port-20220125173828-11219
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (308.52s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-th5qr" [2aa944e7-256e-4e09-aff7-89fbc45faafe] Running
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-th5qr" [2aa944e7-256e-4e09-aff7-89fbc45faafe] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:259: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015238304s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (7.01s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-th5qr" [2aa944e7-256e-4e09-aff7-89fbc45faafe] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0125 17:49:05.007901   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/functional-20220125160520-11219/client.crt: no such file or directory
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008512716s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context embed-certs-20220125173758-11219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Done: kubectl --context embed-certs-20220125173758-11219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (2.000101966s)
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (7.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.71s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-20220125173758-11219 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.71s)

TestStartStop/group/embed-certs/serial/Pause (4.47s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-20220125173758-11219 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220125173758-11219 -n embed-certs-20220125173758-11219
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220125173758-11219 -n embed-certs-20220125173758-11219: exit status 2 (646.627613ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220125173758-11219 -n embed-certs-20220125173758-11219
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220125173758-11219 -n embed-certs-20220125173758-11219: exit status 2 (644.577454ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-20220125173758-11219 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-20220125173758-11219 -n embed-certs-20220125173758-11219
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-20220125173758-11219 -n embed-certs-20220125173758-11219
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.47s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
E0125 17:49:18.565442   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-sttnn" [b6b11253-5e44-4054-ba4b-382b12aab613] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:259: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017652811s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.92s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-ccd587f44-sttnn" [b6b11253-5e44-4054-ba4b-382b12aab613] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008869566s
start_stop_delete_test.go:276: (dbg) Run:  kubectl --context default-k8s-different-port-20220125173828-11219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:276: (dbg) Done: kubectl --context default-k8s-different-port-20220125173828-11219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.912128187s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (6.92s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.65s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-different-port-20220125173828-11219 "sudo crictl images -o json"
start_stop_delete_test.go:289: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.65s)

TestStartStop/group/default-k8s-different-port/serial/Pause (4.91s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-different-port-20220125173828-11219 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220125173828-11219 -n default-k8s-different-port-20220125173828-11219

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220125173828-11219 -n default-k8s-different-port-20220125173828-11219: exit status 2 (687.820949ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220125173828-11219 -n default-k8s-different-port-20220125173828-11219
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220125173828-11219 -n default-k8s-different-port-20220125173828-11219: exit status 2 (700.040206ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220125173828-11219 --alsologtostderr -v=1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:296: (dbg) Done: out/minikube-darwin-amd64 unpause -p default-k8s-different-port-20220125173828-11219 --alsologtostderr -v=1: (1.100111006s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220125173828-11219 -n default-k8s-different-port-20220125173828-11219
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220125173828-11219 -n default-k8s-different-port-20220125173828-11219
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (4.91s)

TestStartStop/group/newest-cni/serial/FirstStart (89.9s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220125174933-11219 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.3-rc.0

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:171: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220125174933-11219 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.3-rc.0: (1m29.89498234s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (89.90s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:190: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-20220125174933-11219 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:196: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/newest-cni/serial/Stop (17.34s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-20220125174933-11219 --alsologtostderr -v=3
E0125 17:51:07.760024   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/no-preload-20220125173411-11219/client.crt: no such file or directory
E0125 17:51:15.497222   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/cilium-20220125165335-11219/client.crt: no such file or directory
start_stop_delete_test.go:213: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-20220125174933-11219 --alsologtostderr -v=3: (17.3353228s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (17.34s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.38s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:224: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220125174933-11219 -n newest-cni-20220125174933-11219
start_stop_delete_test.go:224: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220125174933-11219 -n newest-cni-20220125174933-11219: exit status 7 (147.811916ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:224: status error: exit status 7 (may be ok)
start_stop_delete_test.go:231: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-20220125174933-11219 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/newest-cni/serial/SecondStart (66.29s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-20220125174933-11219 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.3-rc.0
E0125 17:51:28.099685   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/custom-weave-20220125165335-11219/client.crt: no such file or directory
E0125 17:51:54.057000   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/addons-20220125155914-11219/client.crt: no such file or directory
E0125 17:52:01.168744   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/old-k8s-version-20220125172750-11219/client.crt: no such file or directory
E0125 17:52:01.612904   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/kubenet-20220125165334-11219/client.crt: no such file or directory
start_stop_delete_test.go:241: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-20220125174933-11219 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.23.3-rc.0: (1m5.591336765s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-20220125174933-11219 -n newest-cni-20220125174933-11219
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (66.29s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:269: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.64s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:289: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-20220125174933-11219 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.64s)

TestStartStop/group/newest-cni/serial/Pause (4.53s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-20220125174933-11219 --alsologtostderr -v=1
E0125 17:52:29.966379   11219 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/darwin-amd64-docker--13326-10047-7d129a660e0abf125cce994bee2942d8ab6dd57f/.minikube/profiles/bridge-20220125165334-11219/client.crt: no such file or directory
start_stop_delete_test.go:296: (dbg) Done: out/minikube-darwin-amd64 pause -p newest-cni-20220125174933-11219 --alsologtostderr -v=1: (1.13710164s)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220125174933-11219 -n newest-cni-20220125174933-11219
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220125174933-11219 -n newest-cni-20220125174933-11219: exit status 2 (625.979261ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220125174933-11219 -n newest-cni-20220125174933-11219
start_stop_delete_test.go:296: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220125174933-11219 -n newest-cni-20220125174933-11219: exit status 2 (624.732682ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:296: status error: exit status 2 (may be ok)
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-20220125174933-11219 --alsologtostderr -v=1
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-20220125174933-11219 -n newest-cni-20220125174933-11219
start_stop_delete_test.go:296: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-20220125174933-11219 -n newest-cni-20220125174933-11219
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.53s)

Test skip (20/281)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.23.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.23.2/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.2/cached-images (0.00s)

TestDownloadOnly/v1.23.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.23.2/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.2/binaries (0.00s)

TestDownloadOnly/v1.23.3-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.23.3-rc.0/cached-images
aaa_download_only_test.go:123: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.3-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.3-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.23.3-rc.0/binaries
aaa_download_only_test.go:142: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.3-rc.0/binaries (0.00s)

TestAddons/parallel/Registry (13.5s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:281: registry stabilized in 16.878273ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-f5r72" [bdbadcdf-2b51-4f39-bdb2-1a0fb18b2b3f] Running
addons_test.go:283: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.015307982s

=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-vqt85" [661b66eb-ca69-486e-b6cf-9b76ddde5dd1] Running
addons_test.go:286: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.01893501s
addons_test.go:291: (dbg) Run:  kubectl --context addons-20220125155914-11219 delete po -l run=registry-test --now
addons_test.go:296: (dbg) Run:  kubectl --context addons-20220125155914-11219 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:296: (dbg) Done: kubectl --context addons-20220125155914-11219 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.310922628s)
addons_test.go:306: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (13.50s)

TestAddons/parallel/Ingress (11.73s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:163: (dbg) Run:  kubectl --context addons-20220125155914-11219 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Run:  kubectl --context addons-20220125155914-11219 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context addons-20220125155914-11219 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [26f5e7a8-70c5-412c-ba3b-d0bc9bfc8e85] Pending
helpers_test.go:343: "nginx" [26f5e7a8-70c5-412c-ba3b-d0bc9bfc8e85] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:343: "nginx" [26f5e7a8-70c5-412c-ba3b-d0bc9bfc8e85] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:201: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.014238528s
addons_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 -p addons-20220125155914-11219 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.73s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:449: Skipping Olm addon till images are fixed
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmd (10.99s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1431: (dbg) Run:  kubectl --context functional-20220125160520-11219 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1437: (dbg) Run:  kubectl --context functional-20220125160520-11219 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1442: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-54fbb85-xfcs6" [343cd4b3-ebb3-4662-b256-706041d3cf45] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-54fbb85-xfcs6" [343cd4b3-ebb3-4662-b256-706041d3cf45] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1442: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 10.01574703s
functional_test.go:1447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-20220125160520-11219 service list
functional_test.go:1456: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmd (10.99s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:98: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:35: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (46.65s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20220125161515-11219 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20220125161515-11219 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.469924465s)
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220125161515-11219 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220125161515-11219 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (216.583687ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.110.128.57:443: connect: connection refused

** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220125161515-11219 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220125161515-11219 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (166.88508ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.110.128.57:443: connect: connection refused

** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220125161515-11219 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220125161515-11219 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (161.109976ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.110.128.57:443: connect: connection refused

** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220125161515-11219 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220125161515-11219 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (165.062524ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.110.128.57:443: connect: connection refused

** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220125161515-11219 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:183: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-20220125161515-11219 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (10.158344855s)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: context deadline exceeded

** /stderr **
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20220125161515-11219 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20220125161515-11219 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [3b37704c-2547-4016-b4a4-8350d0fe71ea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [3b37704c-2547-4016-b4a4-8350d0fe71ea] Running
addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.015100181s
addons_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-20220125161515-11219 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:233: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestIngressAddonLegacy/serial/ValidateIngressAddons (46.65s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/flannel (0.81s)
=== RUN   TestNetworkPlugins/group/flannel
net_test.go:77: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:176: Cleaning up "flannel-20220125165334-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-20220125165334-11219
--- SKIP: TestNetworkPlugins/group/flannel (0.81s)

TestStartStop/group/disable-driver-mounts (0.84s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20220125173828-11219" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-20220125173828-11219
--- SKIP: TestStartStop/group/disable-driver-mounts (0.84s)
